An inferior U.S. health care system made the COVID-19 pandemic worse

By Kent R. Kroeger (Source: NuQum.com; February 19, 2021)

A computer generated representation of COVID-19 virions (SARS-CoV-2) under electron microscope (Image by Felipe Esquivel Reed; Used under the CCA-Share Alike 4.0 International license.)

The U.S. may have experienced 7.7 million additional COVID-19 cases and 155 thousand additional COVID-19 deaths due to its subpar health care system.

This finding is based on a cross-national statistical analysis of 20 West European and West European-heritage countries using aggregate, country-level data provided by Johns Hopkins University (COVID-19 cases and deaths per 1 million people), OurWorldInData.org (Policy Stringency Index) and HealthSystemFacts.org (Healthcare Access and Quality Index). The analysis covers the period from January 1, 2020 to February 5, 2021.

Figure 1 (below) shows the bivariate relationship between the number of COVID-19 cases (per 1 million people) and the quality of a country's health care system as measured by the Healthcare Access and Quality Index (HAQ Index) compiled during the 2016 Global Burden of Disease Study.

Countries where health care access is universal and of high quality have performed much better on COVID-19 cases per capita. New Zealand, Australia, Iceland, Finland and Norway are positive exemplars in this regard. Israel, Portugal, the U.S., and the U.K., in comparison, are not.

Figure 1: Health care access/quality (HAQ) and its relationship to COVID-19 cases per 1 million people


More interestingly, if over the study period we control for the average level of COVID-19 policy actions (as measured by Oxford University's COVID-19 Government Response Tracker (OxCGRT)) and whether or not a country is an island, the quality of a country's health care system remains statistically significant.

In a simple linear regression model, three variables — the HAQ Index, COVID-19 Policy Stringency, and whether or not the country is an island — account for about 60 percent of the variance in COVID-19 cases per capita across these 20 countries.

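For readers who want to see the structure of that model, here is a minimal sketch in R. The data frame values are placeholders for illustration only (the actual inputs are the country-level series from Johns Hopkins, OxCGRT and the HAQ Index described above), so the point is the specification, not the estimates.

# Sketch of the model specification only; the values below are illustrative
# placeholders, not the study data.
covid <- data.frame(
  cases_per_million = c(2300, 16000, 12000, 82000, 58000, 21000),
  haq_index         = c(92, 97, 97, 89, 90, 94),
  policy_stringency = c(40, 42, 45, 60, 62, 58),  # average OxCGRT stringency
  island            = c(1, 1, 0, 0, 1, 0)         # 1 = island nation
)

m <- lm(cases_per_million ~ haq_index + policy_stringency + island, data = covid)
summary(m)  # in the actual 20-country data, the model explains roughly 60 percent of the variance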

Using this model, we can estimate the number of COVID-19 cases (per 1 million people) the U.S. would have experienced if its health care system was as good as the countries rated as having the best health care systems in the world (Iceland and Norway — HAQ Index = 97).

(2,946.23 cases per 1 million residents per HAQ Index point × 8 points) × 330 million residents ≈ 7,778,000 additional COVID-19 cases

[Note: U.S. has approximately 330 million people and its HAQ Index = 89]

Additionally, as there is a strong relationship between the number of COVID-19 cases per capita and the number of COVID-19 deaths per capita (i.e., roughly 0.02 deaths per case — see Appendix), we can estimate that the U.S. has experienced 155,560 additional deaths as a result of inadequacies with its health care system.
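As a quick check, the arithmetic behind both estimates can be reproduced in a few lines of R, using the coefficient reported above and the roughly 2 percent deaths-per-case ratio from the Appendix:

haq_coef     <- -2946.23  # model coefficient: cases per 1 million residents per HAQ Index point
haq_gap      <- 97 - 89   # best-rated systems (Iceland, Norway) minus the U.S.
us_pop_mill  <- 330       # U.S. population, in millions

extra_cases  <- -haq_coef * haq_gap * us_pop_mill  # about 7,778,000 additional cases
extra_deaths <- extra_cases * 0.02                 # about 155,560 additional deaths
round(c(extra_cases, extra_deaths))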

The U.S. does not have the best health care system in the world

The national news media has devoted remarkably little discussion to the systemic problems within the U.S. health care system and how those problems contributed to the tragic COVID-19 numbers witnessed by this country during the pandemic. While most of the media attention has focused on political failures associated with the COVID-19 pandemic — and most of that has been directed at the Trump administration — the hard evidence continues to suggest systemic factors, such as racial disparities in socioeconomics and health, are driving U.S. COVID-19 case and death rates above other developed countries.

“Socioeconomically and racially segregated neighborhoods are more vulnerable and are more likely to be disproportionately impacted by the adverse effects of COVID-19,” conclude health analysts Ahmad Khanijahani and Larisa Tomassoni. As for why this is the case, Khanijahani and Tomassoni offer this explanation:

“Black and low-SES individuals in the US are more likely to be employed as essential workers in occupations such as food distribution, truckers, and janitors. Most of these jobs cannot be fulfilled remotely and usually do not offer adequate sick leaves. Additionally, individuals of low-SES and Black minority are disproportionately impacted by homelessness or reside in housing units with limited space that makes the practice of isolating infected family members challenging or impossible. Moreover, limited or no child/elderly care and higher uninsurance rates impose an additional financial burden on low-SES families (emphasis mine) making it challenging to stop working.”

The racial and ethnic disparities in COVID-19 deaths are shockingly apparent in Figure 2, produced by the Centers for Disease Control and Prevention (CDC).

Figure 2: Racial/ethnic disparities in COVID-19 deaths in the U.S. (Source: CDC)


The gray bars in Figure 2 show how non-Hispanic Whites across all age categories have experienced fewer deaths than expected relative to their prevalence in the total U.S. population. In stark contrast, across all age groups, Hispanic and non-Hispanic Blacks account for a significantly higher percentage of COVID-19 deaths than expected based on their population numbers.

Figure 2 is what a broken health care system interacting with systemic racial and ethnic inequalities looks like in a chart.

Final thoughts

Citing the negative role of the U.S. health care system in COVID-19 outcomes is not an indictment of U.S. health care workers. To the contrary, because Americans live in a country where health care is significantly rationed by market forces — e.g., a relatively high rate of uninsured people and patients delaying preventive care, diagnoses and treatments due to financial concerns — our health care workers are forced to work harder as a high number of COVID-19 patients enter the health care system only after their symptoms have become severe.

The awful impact of the COVID-19 pandemic on Americans is less a political failure than it is a systemic failure — though we cannot dismiss the ineptitude of politicians like New York Governor Andrew Cuomo who, presumably at the behest of his health policy experts, authorized sending critically ill seniors from hospitals to nursing homes in order to free up hospital beds. New York's elderly paid a steep price so Governor Cuomo could learn that nursing homes are not hospitals.

More broadly, the COVID-19 pandemic exposed a broken and inadequate U.S. health care system that is better designed to protect high profit margins for insurance and pharmaceutical companies and the billing rates of physicians than it is to ensure the health of the American people.

Sadly, with the physician, health insurance and pharmaceutical lobbyists firmly entrenched in the policymaking machinery of the Biden administration, don't expect the types of health care system reforms needed to fix our systemic problems any time soon.

  • K.R.K.

Send comments to: nuqum@protonmail.com

Methodological Note:

The decision to analyze just 20 West European and West European-heritage countries (i.e., U.S., Israel, Canada, Australia, and New Zealand) was prompted by a desire to control for two factors that are significantly related to country-level COVID-19 outcomes: (1) cultural norms, and (2) economic development.

Controlling for cultural norms, particularly the differences between countries with individualistic cultures (i.e., Western European nations) and those with collectivist cultures (i.e., East Asian nations), facilitated a clearer look at the impact of health care systems on COVID-19 outcomes.

As for the exclusion of lesser-developed countries from this analysis, I simply don’t trust the accuracy or completeness of their COVID-19 data.

APPENDIX — A Linear Model of COVID-19 Deaths (Per 1 Million People)


Were proactive COVID-19 policies the key to success?

By Kent R. Kroeger (Source: NuQum.com; February 11, 2021)

There is no weenier way of copping out in data journalism (and social science more broadly) than posing a question in an article’s headline.

This intellectual timidity probably stems from the fact that most peer-reviewed, published social science research findings are irreproducible. In other words, social science research findings are more likely a function of bias and methodology than a function of reality itself.

As my father, a mechanical engineer, would often say: “Social science is not science.”

The consequence is that social science findings are too often artifacts of their methods and temporal context — so much so that a field like psychology has become a graveyard for old, discredited theories: Physiognomy (i.e., assessing personality traits through facial characteristics), graphology (i.e., assessing personality traits through handwriting), primal therapy (i.e., psychotherapy method where patients re-experience childhood pains), and neuro-linguistic programming (i.e., reprogramming behavioral patterns through language) are just a few embarrassing examples.

Indeed, established psychological theories such as cognitive dissonance have proven to be such an over-simplification of behavioral reality that their practical and academic utility is debatable.

And what does this have to do with COVID-19? Not much. I’m just venting.

Well, not exactly.

The COVID-19 pandemic has unleashed a torrent of speculation and research about what COVID-19 policies work and don’t work.

For example, how effective are masks and mandatory mask policies?

Masks work, conclude most studies.

Another cross-national study, however, found that it is cultural norms that drive the effectiveness of mandatory mask policies.

And there are credible scientific studies that show the effectiveness of masks can be seriously compromised without other factors in place (namely, social distancing).

In part, the variation in findings on mask-wearing is a product of a healthy application of the scientific method. No single study can address every aspect of mask effectiveness.

Research is like a gestalt painting where a single study represents but a tiny part of the total picture. One must step back from the specific research findings of one study in order to understand the essence of all the research together.

In other words, masks work, but with some important caveats.

Some countries have done a better job than others at containing COVID-19

Among the largest, most developed economies, it is increasingly apparent which countries have been most effective at minimizing the impact of COVID-19.

According to the cumulative tally reported at RealClearPolitics.com, these 10 developed countries have the lowest COVID-19 death rates (per 1 million people) as of February 10, 2021:

Taiwan — 0 (deaths per 1 million)
China — 3
Singapore — 5
New Zealand — 5
Iceland — 6
South Korea — 29
Australia — 36
Japan — 52
UAE — 99
Norway — 111

On the other end of the scale, these 10 developed countries have the highest COVID-19 death rates (per 1 million people) as of February 10, 2021:

Belgium — 1,880 (deaths per 1 million)
UK — 1,712
Italy — 1,522
U.S. — 1,467
Portugal — 1,431
Spain — 1,350
Mexico — 1,335
Sweden — 1,210
France — 1,196
Switzerland — 1,139

If there are complaints about the validity of the data from China or Taiwan, I am all ears. However, regardless of their inclusion, an informal review of the other advanced countries suggests some patterns in their COVID-19 outcomes over time.

First, in fighting COVID-19, it helps to be on an island (Taiwan, Singapore, New Zealand, Iceland, Australia, and Japan). And for all practical purposes, I consider South Korea to have near-island status as few people enter that country by way of a land border.

Second, it appears that European Romance-language countries such as Italy, Portugal, Spain and France (tourism, perhaps?) and countries with subpar health care systems (such as the U.S. and Switzerland, which rely disproportionately on insurance-based health care access) have not fared well in fighting COVID-19.

On an anecdotal level, at least, I’ve explained most of the high- and low-performing countries with respect to COVID-19 without even referencing their COVID-19 policies.

So what impact have COVID-19 policies had on containing this pandemic?

Though lacking a definitive answer to that question at the level of specific policies, in the aggregate there are strong indications that changes in national COVID-19 policies have had, for a small subset of countries, at least one of two discernible relationships with weekly variation in new COVID-19 cases. A small number of countries have been proactive in their COVID-19 policies, and the result has been relatively few COVID-19 deaths per capita. Another small set of countries have been reactive in their COVID-19 policies, and their COVID-19 deaths per capita have been relatively higher in comparison to the proactive countries. Most countries fall somewhere in the middle, as they have been both proactive and reactive.

Before we look at the data, here is a conceptual perspective.

Figure 1 visualizes a theoretical framework for how national policies may relate to the spread of COVID-19. There are two axes: (1) the vertical axis represents the correlation between changes in COVID-19 policies and changes in weekly new COVID-19 cases, while (2) the horizontal axis represents that relationship at various time lags.

In this construct, consider the weekly number of new COVID-19 cases to be our outcome variable (Y) and the stringency of national COVID-19 policies to be our input variable (X).

The intersection of the two axes represents the contemporaneous relationship between COVID-19 policies (X) and new COVID-19 cases (Y). As one proceeds to the left of center on the horizontal axis, this represents the relationship between prior values of COVID-19 policy and future numbers of new COVID-19 cases (i.e., X causes Y). As one proceeds to the right of center on the horizontal axis, this represents the relationship between prior numbers of new COVID-19 cases and future values of COVID-19 policy (Y causes X).

Figure 1: A Theoretical Framework for Understanding COVID-19 Policies

Figure 2 shows the practical implication of Figure 1: COVID-19 policies in some countries will generally follow the red line over time (reactive), while others the green line (proactive), and still others — probably the majority of countries — will follow the black line (a combination of reactive and proactive policy changes).

Figure 2: Proactive Policies versus Reactive Policies

In fact, these patterns did emerge across the 30 countries I analyzed when I plotted the cross correlations over time between changes in their COVID-19 policies and changes in the weekly number of new COVID-19 cases.

Country-level patterns in COVID-19 policy effectiveness

As demonstrably important as COVID-19 policies such as mask mandates or business lockdowns are to containing COVID-19, my curiosity is with an aggregate measure of those policies, as any single policy will not be enough to address something as pervasive as COVID-19.

As a result, I used country-level COVID-19 policy data from the Oxford COVID-19 Government Response Tracker (OxCGRT), compiled by researchers at the Blavatnik School of Government at the University of Oxford, who aggregate 17 policy measures, ranging from containment and closure policies (such as school closures and restrictions on movement) to economic policies and health system policies (such as testing regimes), to create one summary measure: the COVID-19 Policy Stringency Index (PSI). Details on how OxCGRT collects and summarizes their policy data can be found in a working paper.

The outcome measure used here is the number of new COVID-19 cases reported by Johns Hopkins University every week from January 20, 2020 to February 5, 2021 for each of the following countries: Australia, Austria, Belgium, Brazil, Canada, Denmark, Finland, France, Germany, Greece, Iceland, India, Indonesia, Ireland, Israel, Italy, Japan, Mexico, Netherlands, New Zealand, Norway, Portugal, Russia, South Africa, South Korea, Spain, Sweden, Switzerland, UK, and US.

It should be noted that the raw data from OxCGRT and Johns Hopkins were at the daily level, but were aggregated to the weekly level for data smoothing purposes.

In total, there were 50 data points for each of the 30 countries and, to address the relevance of the theoretical framework in Figure 2, a bivariate Granger causality test was employed for each country (an example of the R code used to generate this analysis is in the appendix).
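Stripped of the plotting and table output shown in the appendix, the core of that test for a single country comes down to differencing the two weekly series and running lmtest::grangertest() in both directions. The sketch below uses the first 20 weeks of the Iceland series listed in the appendix and a shorter lag order (2 rather than the 4 used on the full series):

library(lmtest)

# Weekly new COVID-19 cases and the OxCGRT Policy Stringency Index for Iceland:
# the first 20 weeks of the series listed in the appendix.
cases <- c(49, 106, 317, 490, 454, 272, 71, 30, 8, 3, 1, 2, 2, 0, 2, 7, 5, 10, 3, 5)
psi   <- c(16.67, 16.67, 48.02, 53.70, 53.70, 53.70, 53.70, 53.70, 53.70, 50.53,
           50.00, 50.00, 41.27, 39.81, 39.81, 39.81, 39.81, 39.81, 39.81, 39.81)

d_cases <- diff(cases)  # first differences, as in the appendix script
d_psi   <- diff(psi)

grangertest(d_cases ~ d_psi, order = 2)  # do past policy changes help predict cases? (proactive)
grangertest(d_psi ~ d_cases, order = 2)  # do past case counts help predict policy? (reactive)

# Cross-correlation function: lags left of zero correspond to policy leading
# cases; lags right of zero correspond to cases leading policy.
ccf(d_psi, d_cases, main = "CCF: policy stringency vs. new cases (Iceland)")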

The Results

Of the 30 countries in this analysis, bivariate Granger causality tests found only three in which prior increases in the stringency of COVID-19 policies (PSI) were significantly associated with decreases in the weekly numbers of new COVID-19 cases (Iceland, New Zealand, and Norway). Figures 3 through 5 show the cross correlation functions (CCF) for those countries in which their COVID-19 policies shaped events, instead of merely reacting to them. Hence, I label their COVID-19 policies as proactive.

In another six countries it was found that increases in the stringency of COVID-19 policies tended to follow increases in the weekly numbers of new COVID-19 cases (see Figures 6 through 9). In other words, their COVID-19 policies tended to follow events instead of shaping them. The policies in these countries are therefore labeled as reactive.

For the remaining 21 countries, the Granger causality tests revealed no significant relationships in either causal direction, though their CCF patterns tended to follow the S-curve shape posited in Figures 1 and 2. The lack of statistical significance in those cases could be a function of the limited sample sizes, which were a product of aggregating the data to the weekly level.

PROACTIVE COUNTRIES:

Figure 3: Cross Correlation Function — Iceland

Figure 4: Cross Correlation Function — New Zealand

Figure 5: Cross Correlation Function — Norway

REACTIVE COUNTRIES:

Figure 6: Cross Correlation Function — Germany

Figure 7: Cross Correlation Function — Israel

Figure 8: Cross Correlation Function — Switzerland

Figure 9: Cross Correlation Function — Austria

Proactive countries have had better COVID-19 outcomes

Figures 10 and 11 reveal how the COVID-19 outcomes in the proactive countries were significantly better than in the other countries. Countries that kept ahead of the health crisis did a better job of controlling the health crisis.

Figure 10: Confirmed COVID-19 Cases (per 1 million) by Policy Group

Figure 11: Confirmed COVID-19 Deaths (per 1 million) by Policy Group

Not coincidentally, many of the qualitative and quantitative analyses of worldwide COVID-19 policies have found Iceland, New Zealand, and Norway among the highest performers according to their metrics (examples are here, here and here).

Final Thoughts

In no way does my analysis suggest that COVID-19 policies in only three countries (Iceland, New Zealand, and Norway) were effective and the policies in the remaining countries were mere reactions to an ongoing health crisis they could not control.

Undoubtedly, there are well-documented examples of policy impotence across this worldwide pandemic. The lack of consistent mask mandates in U.S. states like Arizona, North Dakota and South Dakota may help explain why those states have among the highest COVID-19 infection rates in the country. Sweden's initial decision to keep its economy open during the early stages of the pandemic most likely explains its relatively high case and fatality rates relative to its Scandinavian neighbors.

But, in fairness, not every policy (or lack thereof) is going to work for a pathogen that has proven to be so pernicious. At the same time, as this pandemic winds down with the roll out of vaccinations, we are now seeing evidence in retrospect that a relatively small number of countries did do a better job than others in managing this pandemic. For the majority of countries, however, their policy leaders may have had frighteningly little impact on the ultimate course of this virus. Their citizens would have been better off moving to an island.

  • K.R.K.

Send comments to: nuqum@protonmail.com

Appendix

R code used to generate the bivariate Granger causality test for Iceland:

y <- c(49.00,106.00,317.00,490.00,454.00,272.00,71.00,30.00,8.00,3.00,1.00,2.00,2.00,.00,2.00,7.00,5.00,10.00,3.00,5.00,5.00,50.00,62.00,44.00,59.00,42.00,36.00,26.00,145.00,294.00,271.00,588.00,538.00,396.00,471.00,198.00,123.00,83.00,102.00,105.00,76.00,69.00,62.00,71.00,126.00,76.00,25.00,21.00)
x <- c(16.67,16.67,48.02,53.70,53.70,53.70,53.70,53.70,53.70,50.53,50.00,50.00,41.27,39.81,39.81,39.81,39.81,39.81,39.81,39.81,39.81,41.66,46.30,46.30,46.30,46.30,46.30,39.15,37.96,37.96,37.96,40.34,39.15,42.99,46.43,52.78,52.78,52.78,52.78,52.78,52.78,52.78,52.78,52.78,50.40,46.82,44.44,44.44)
# Transformation and test parameters: Box-Cox lambda of 1 (i.e., no real
# transformation), first differences for both series, no seasonal differencing,
# and a lag order of 4 for the Granger causality tests.
par1 <- '1'  # Box-Cox lambda for x
par2 <- '1'  # non-seasonal differencing order for x
par3 <- '0'  # seasonal differencing order for x
par4 <- '1'  # seasonal lag used if par3/par7 > 0
par5 <- '1'  # Box-Cox lambda for y
par6 <- '1'  # non-seasonal differencing order for y
par7 <- '0'  # seasonal differencing order for y
par8 <- '4'  # lag order for the Granger causality tests
ylab <- 'y'
xlab <- 'x'

library(lmtest)
par1 <- as.numeric(par1)
par2 <- as.numeric(par2)
par3 <- as.numeric(par3)
par4 <- as.numeric(par4)
par5 <- as.numeric(par5)
par6 <- as.numeric(par6)
par7 <- as.numeric(par7)
par8 <- as.numeric(par8)
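# Keep untransformed copies for the raw-data CCF/ACF plots below, then apply
# the Box-Cox transformation and differencing specified by the parameters above.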
ox <- x
oy <- y
if (par1 == 0) {
x <- log(x)
} else {
x <- (x ^ par1 - 1) / par1
}
if (par5 == 0) {
y <- log(y)
} else {
y <- (y ^ par5 - 1) / par5
}
if (par2 > 0) x <- diff(x,lag=1,difference=par2)
if (par6 > 0) y <- diff(y,lag=1,difference=par6)
if (par3 > 0) x <- diff(x,lag=par4,difference=par3)
if (par7 > 0) y <- diff(y,lag=par4,difference=par7)
print(x)
print(y)
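# Granger tests in both directions: does policy stringency (x) help predict new
# COVID-19 cases (y), and does y help predict x?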
(gyx <- grangertest(y ~ x, order=par8))
(gxy <- grangertest(x ~ y, order=par8))
postscript(file="/home/tmp/1auwb1612990002.ps",horizontal=F,onefile=F,pagecentre=F,paper="special",width=8.3333333333333,height=5.5555555555556)
op <- par(mfrow=c(2,1))
(r <- ccf(ox,oy,main='Cross Correlation Function (raw data)',ylab='CCF',xlab='Lag (k)'))
(r <- ccf(x,y,main='Cross Correlation Function (transformed and differenced)',ylab='CCF',xlab='Lag (k)'))
par(op)
dev.off()
postscript(file="/home/tmp/2xjfo1612990002.ps",horizontal=F,onefile=F,pagecentre=F,paper="special",width=8.3333333333333,height=5.5555555555556)
op <- par(mfrow=c(2,1))
acf(ox,lag.max=round(length(x)/2),main='ACF of x (raw)')
acf(x,lag.max=round(length(x)/2),main='ACF of x (transformed and differenced)')
par(op)
dev.off()
postscript(file="/home/tmp/3zoqj1612990002.ps",horizontal=F,onefile=F,pagecentre=F,paper="special",width=8.3333333333333,height=5.5555555555556)
op <- par(mfrow=c(2,1))
acf(oy,lag.max=round(length(y)/2),main='ACF of y (raw)')
acf(y,lag.max=round(length(y)/2),main='ACF of y (transformed and differenced)')
par(op)
dev.off()
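# Note: the table.* and table.save() helpers below are not base R functions;
# they come from the hosted statistics environment used to run this script.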

a<-table.start()
a<-table.row.start(a)
a<-table.element(a,'Granger Causality Test: Y = f(X)',5,TRUE)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a,'Model',header=TRUE)
a<-table.element(a,'Res.DF',header=TRUE)
a<-table.element(a,'Diff. DF',header=TRUE)
a<-table.element(a,'F',header=TRUE)
a<-table.element(a,'p-value',header=TRUE)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a,'Complete model',header=TRUE)
a<-table.element(a,gyx$Res.Df[1])
a<-table.element(a,'')
a<-table.element(a,'')
a<-table.element(a,'')
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a,'Reduced model',header=TRUE)
a<-table.element(a,gyx$Res.Df[2])
a<-table.element(a,gyx$Df[2])
a<-table.element(a,gyx$F[2])
a<-table.element(a,gyx$Pr[2])
a<-table.row.end(a)
a<-table.end(a)
table.save(a,file="/home/tmp/4l57f1612990002.tab")
a<-table.start()
a<-table.row.start(a)
a<-table.element(a,'Granger Causality Test: X = f(Y)',5,TRUE)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a,'Model',header=TRUE)
a<-table.element(a,'Res.DF',header=TRUE)
a<-table.element(a,'Diff. DF',header=TRUE)
a<-table.element(a,'F',header=TRUE)
a<-table.element(a,'p-value',header=TRUE)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a,'Complete model',header=TRUE)
a<-table.element(a,gxy$Res.Df[1])
a<-table.element(a,'')
a<-table.element(a,'')
a<-table.element(a,'')
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a,'Reduced model',header=TRUE)
a<-table.element(a,gxy$Res.Df[2])
a<-table.element(a,gxy$Df[2])
a<-table.element(a,gxy$F[2])
a<-table.element(a,gxy$Pr[2])
a<-table.row.end(a)
a<-table.end(a)
table.save(a,file="/home/tmp/5drvm1612990002.tab")

try(system("convert /home/tmp/1auwb1612990002.ps /home/tmp/1auwb1612990002.png",intern=TRUE))
try(system("convert /home/tmp/2xjfo1612990002.ps /home/tmp/2xjfo1612990002.png",intern=TRUE))
try(system("convert /home/tmp/3zoqj1612990002.ps /home/tmp/3zoqj1612990002.png",intern=TRUE))

Why is Hollywood failing with its re-branded science fiction and superhero franchises?

Photo by Tomas Castelazo — Own work, CC BY-SA 3.0

By Kent R. Kroeger (Source: NuQum.com; February 4, 2021)

In April 1985, the Coca-Cola Company, the largest beverage company in the world, replaced their flagship beverage, Coca-Cola, with New Coke — a soda drink designed to match the sugary sweetness of Coca-Cola’s biggest competitor, Pepsi.

At the time, Pepsi was riding a surge in sales, fueled by two marketing campaigns: The first was a clever use of blind taste tests, called the "Pepsi Challenge," through which Pepsi claimed most consumers preferred the taste of Pepsi over Coca-Cola. The second, the "Pepsi Generation" campaign, featured the most popular show business personality of the time, Michael Jackson. Pepsi's message to consumers was clear: Pepsi is young and cool and Coca-Cola isn't.

Hence, the launch of New Coke — which, to this day, is considered one of the great marketing and re-branding failures of all time. Within weeks of New Coke’s launch it was clear to Coca-Cola’s senior management that their loyal customer base — raised on the original Coca-Cola formula — was not going to accept the New Coke formula. Hastily, the company would re-brand their original Coca-Cola formula as Coca-Cola Classic.

Coca-Cola's public relations managers tried to retcon the whole New Coke-Classic Coke story to make it appear the company had planned to launch Coca-Cola Classic all along — but most business historians continue to describe New Coke as an epic re-branding failure. New Coke was discontinued in July 2002.

What did Coca-Cola do wrong? First, it never looks good for a leader to appear too reactive to a rising competitor. On a practical level, for brands to lead over long periods they must adapt to changing consumer tastes — but there is a difference between 'adapting' and 'panicking.' Coca-Cola panicked.

But, in what may have been Coca-Cola’s biggest mistake, they failed to understand the emotional importance to their loyal customers of the original Coca-Cola formula.

“New Coke left a bitter taste in the mouths of the company’s loyal customers,” according to the History Channel’s Christopher Klein. “Within weeks of the announcement, the company was fielding 5,000 angry phone calls a day. By June, that number grew to 8,000 calls a day, a volume that forced the company to hire extra operators. ‘I don’t think I’d be more upset if you were to burn the flag in our front yard,’ one disgruntled drinker wrote to company headquarters.”

Prior to New Coke’s roll out, Coca-Cola did the taste-test research (which showed New Coke was favored over Pepsi), but they didn’t understand the psychology of Coca-Cola’s most loyal customers.

“The company had underestimated loyal drinkers’ emotional attachments to the brand. Never did its market research testers ask subjects how they would feel if the new formula replaced the old one,” according to Klein.

Is Hollywood Making the Same Mistake as Coca-Cola?

Another term for ‘loyal customer’ is ‘fan.’ In the entertainment industry, fans represent a franchise’s core audience. They are the first to line up at a movie premiere or stream a TV show when it becomes available. They’ll forgive an occasional plot convenience or questionable acting performance, as long as they can still recognize the characters, mood and narratives that make up the franchise they love.

Star Trek fans showed up in command and science officer-colored swarms for 1979's Star Trek: The Motion Picture, an extremely boring, almost unwatchable film, according to many Trek fans. Yet they still showed up for 1982's Star Trek: The Wrath of Khan (a far better film) even when the casual Trek audience didn't — helping make the Star Trek "trilogy" films (The Wrath of Khan, The Search for Spock, The Voyage Home) among the franchise's most successful.

No superhero franchise has endured as many peaks and valleys in quality as Batman, a campy TV show in my youth, but a significant box office event with Tim Burton's Batman (1989). Unfortunately, the franchise descended into numbing mediocrity after Burton, reaching its creative nadir with 1997's Batman & Robin, only to exceed the critical acclaim of the Burton-era films with Christopher Nolan's Batman trilogy in the 2000s. Through all of this, Batman movies have made money…most of the time.

This phenomenon is common to a lot of science fiction and superhero franchises: Star Wars, Superman, Spider-Man, Doctor Who, Alien, and The Terminator, among others. They are not consistently great, but they almost always bring out a faithful fan base.

That is, until they don’t.

Three major science fiction franchises have undergone significant re-branding efforts in the past five years, in the understandable hope of building a new, younger, and more diverse fan base for these long-time, successful franchises — not too dissimilar from what Coca-Cola was trying to do in the mid-1980s:

Star Wars

Now owned by Disney, Star Wars had its canon significantly altered from the original George Lucas-led movies in the three Disney trilogy films, as the heroic stature of its two most iconic male characters — Luke Skywalker and Han Solo — was unceremoniously diminished in favor of new characters (Rey, Finn, and Poe Dameron). If Disney had a customer complaint line, it would have been overwhelmed after the first trilogy movie, The Force Awakens, and shut down after The Last Jedi.

Result: Disney made billions in box office receipts from the trilogy movies, but it is hard to declare these movies an unqualified success. Yes, the movies made money, but Disney designs movies as devices for generating stable (and profitable) revenue streams across a variety of platforms (amusement park attendance, spin-off videos, toy sales, etc.). The Disney trilogy has generated little apparent interest in sequel films. At the Walt Disney Company's most recent Investor Day last December, which featured announcements for future Star Wars TV and movie projects, not one of the new projects involved characters or story lines emanating from the trilogy movies. More telling, the new Star Wars-themed Galaxy's Edge parks at Disney World and Disneyland saw smaller-than-expected pre-pandemic crowds — and to make matters worse, Star Wars merchandise sales have been soft since the trilogy roll out. Be assured, these outcomes are not part of Disney's Star Wars strategy.

Star Trek

The Star Trek franchise has launched two new TV shows through Paramount/CBS in the past three years: Star Trek: Discovery and Star Trek: Picard. Through three seasons of Discovery and one for Picard, the re-branded Star Trek has turned the inherent optimism of Gene Roddenberry’s original Star Trek vision into a depressing, dystopian future. Starfleet, once an intergalactic beacon for inclusiveness, integrity and justice, is now a bureaucratic leviathan filled with corruption and incompetence. To further distance the new Star Trek from the original Star Trek series (TOS), Discovery’s writers concocted an incomprehensible plot twist — the Seven Red Signals — in order to send the Discovery’s crew 900 years into the future, past the TOS and Star Trek: The Next Generation timelines, thereby freeing the new series from the shackles of previous Star Trek canon.

Result: As Picard has had only one season, I will focus on Discovery, which has had three seasons on CBS's All Access streaming platform (though only one on the broadcast network). In its 2017 series premiere, Discovery reportedly attracted 9.6 million viewers on the CBS broadcast network before the show was transitioned to the streaming service. Parrot Analytics subsequently reported Discovery was the second most streamed TV show in Summer 2017 (after Netflix's Ozark), with 12.6 million average demand expressions, and was first for the week of October 5 through 11, with over 53 million average demand expressions.

Not a bad start, but by moving Discovery to the broadcast side this past year, CBS apparently signaled the show wasn't building enough audience interest on the streaming service to offset the ad revenue losses from not putting it on the broadcast network — or, at least, that is how some TV insiders are interpreting the move. But, given Discovery's broadcast ratings for the first season, it is unlikely the show is inundating the network in increased ad revenues either. Its "linear" premiere broadcast on September 24, 2020 attracted 1.7 million viewers, placing it 8th out of the 12 broadcast network shows on that night — a bad start that has not improved over the next 13 episodes. [The most recent episode, broadcast on January 28, 2021, brought in 1.8 million viewers.]

Nonetheless, perhaps Discovery is at least attracting a new, younger audience for Star Trek? Uh, nope. Consistently, the show has achieved around a 0.2 rating within the 18–49 demo, which translates to about 280,000 people out of the 139 million Americans in that age group. That means the remaining 1.4 million Discovery viewers are aged 50 or older — in other words, old Star Trek nerds like me. How ironic would it be if it were the franchise’s original series fans that saved Discovery from cancellation, despite the show’s apparent attempts to distance itself from those same fans?

Doctor Who

No re-branding effort has broken my heart more than the decline of the BBC's Doctor Who under showrunner Chris Chibnall's leadership. The oldest science fiction series still on television feels irreparably damaged by its underdeveloped companion characters, generally poor scripts, and grade school-level political sermons. The net result? The last two seasons featuring the 13th Doctor, played gamely by Jodie Whittaker, are more often boring than entertaining or thought-provoking.

But most regrettably, to lifelong fans who have loved the show since its first Doctor (played by William Hartnell), the BBC and Chibnall have taken the show's long-established canon, stuffed it in a British Army duffel bag, and thrown it in the River Thames to drown. And how did they do that? By rewriting the Doctor's origin story — a Time Lord exiled from his home world of Gallifrey — in the fifth ("Fugitive of the Judoon") and twelfth ("The Timeless Children") episodes of the 13th Doctor's second season, so that now the first Doctor is actually a previously unknown woman named Ruth Clayton, and the Time Lords' ability to regenerate is derived from a sadistic experiment on a small child who was the first living being found to have regenerative powers. If this story wasn't so stupid, it would be sick.

Chibnall's re-telling of the Doctor's origin story was a WTF! moment for a lot of Whovians (the name often given to Doctor Who fans). But it was not a WTF! moment in the entertaining sense (like when half of The Avengers dissolved at the end of Infinity War); it was a WTF! moment in the bad sense.

Chibnall would have inspired no more controversy if he had gone back and rewritten Genesis 1:1 to read: “In the beginning Hillary Clinton created the heaven and the earth.” Such rewrites have only one purpose: to piss off people emotionally attached to the original story.

And that is exactly what the BBC and Chibnall have done — and many Doctor Who fans (though, as yet, not me) have responded by abandoning the franchise.

Result: The TV ratings history for the 13th Doctor’s two seasons reveals the damage done, though there was hope at the beginning. The 13th Doctor’s first episode on October 7, 2018, pulled in 10.96 million viewers — a significant improvement over the previous Doctor’s final season ratings which never exceeded 7.3 million viewers for an individual episode. However, in a near monotonic decline, the 13th Doctor’s latest episode (and last of the 2020 season) could only generate 4.69 million viewers, an all-time low since the series reboot in 2005.

And why did Doctor Who lose 6.3 million viewers? Because the BBC (through Chibnall) wanted Doctor Who to be more tuned to the times. They wanted a younger, more diverse, more socially enlightened audience for their show. Doctor Who was never cool enough. In fact, the original Doctor Who series was always kind of silly and escapist — a condition completely unacceptable in today’s political climate, according to the big heads at the BBC. Doctor Who needed to be relevant, so it became the BBC’s version of New Coke.

Except the BBC’s new Doctor Who is New Coke only if New Coke had tasted like windshield wiper fluid. From Chibnall’s pointed pen, the show has aggressively (and I would add, vindictively) alienated fans by marginalizing its original story.

Perhaps the Chibnall narrative is objectively a better one. Who am I to say it isn't? But the answer to that question doesn't matter to Whovians who are deeply connected to the pre-Chibnall series. Whovians have left the franchise in the millions, and unless the BBC has already concocted a Coca-Cola Classic-like response, I don't see why they will come back.

Have TV and Movie Studios Forgotten How to Do Market Research?

The lesson from Star Wars, Star Trek and Doctor Who is NOT that brands shouldn’t change over time or that canon is sacrosanct and any deviations are unacceptable. Brands must adapt to survive.

All three of these franchises need a more diverse fan base to stay relevant and that starts with attracting more women and minorities into the fold. But how these franchises tried to evolve has been a textbook example on how not to do it.

In my opinion, it starts with solid writing and good storytelling, which requires better developed characters and more compelling narratives. Harry Potter is the contemporary gold standard. My personal favorite, however, is Guardians of the Galaxy — a comic book series I ignored as a kid, but in cinema form I love. Director/writer James Gunn has offered us a master class in creating memorable characters, such as Nebula, Gamora, Drax, Peter Quill, Mantis, Rocket, and Groot. So much so that a few plot holes now and then are quickly forgiven — not so with the re-branded Star Wars, Star Trek and Doctor Who.

Along with better writing, these three franchises have been poorly managed at the business level — and that starts with market research. Disney, Paramount and the BBC have demonstrated through their actions that they do not know their existing customers, much less how to attract new ones.

As a 2020 American Marketing Association study warns businesses:

“Any standout customer experience starts with figuring out the ‘what’ and working backwards to design, develop and deliver products and services that customers use and recommend to others. But how effective are marketing organizations at understanding “what” customers are looking for and ‘why’?”

The AMA’s answer to that last question was that most businesses — 80 percent by their estimate — do not understand the ‘what’ and ‘whys’ behind their current and potential customers’ motivations.

Does that mean these franchises would have been better off just engaging in slavish fan service? Absolutely not.

Fans are good at spending their money to watch their favorite movies and TV shows. They are not creative writers. Few people in contemporary marketing still reference the old trope of the "customer always being right," as experience has taught companies and organizations that customers don't always know what they want, much less what they need. As Henry Ford is quoted as saying, "If I'd asked people what they wanted, they'd have asked for faster horses" [though it is not clear that he ever said that or would have believed it].

Instead, modern marketers tend to focus on understanding the customer experience and mindset in an effort to strategically differentiate their brands. Central to that process, the best organizations depend heavily on sound, objective research to answer key questions about their current and prospective customers.

There was a time when the entertainment industry was no different in its reliance on consumer data and feedback to shape product and distribution. The power of one particular market research firm, National Research Group (NRG), to determine movie release dates or whether a movie even gets released in theaters is legend in Hollywood. Though now a part of Nielsen Global Media (another research behemoth that has probably done as much to shape what we watch as NBC or CBS ever have), early in its existence the NRG was able to get the six major movie studios to sign exclusivity agreements granting NRG an effective monopoly on consumer-level information regarding upcoming movies. If you can control information, you can control people (including studio executives).

But something has happened in Hollywood in the past few years with respect to science fiction and superhero audiences (i.e., customers) who are perceived by many in Hollywood — wrongfully, I might add — as being predominately white males.

While men have long been over-represented among science fiction and fantasy writers — and that is a problem — the audience for these genres is more evenly divided demographically than commonly assumed.

For certain, the research says science fiction moviegoers skew young and male, but that is a crude understanding of science fiction fandom. Go to a Comic-Con convention and one will see crowds almost evenly divided between men and women and drawn from most races, ethnicities and age groups — though my experience has been that African-Americans are noticeably underrepresented (see photo below).

Comic-Con 2010 — the Hall H crowd (Photo by The Conmunity — Pop Culture Geek; used under CCA 2.0 Generic license.)

Similarly, a 2018 online survey of U.S. adults found that, while roughly three-quarters of men like science fiction and fantasy movies (76% and 71%, respectively), roughly two-thirds of women also like those movie genres (62% and 70%, respectively). The same survey also found that white Americans are no more likely to prefer these movie genres than Hispanic, African-American or other race/ethnicities.

Methodological Note:

Total sample size for this survey of U.S. moviegoers was 2,200 and the favorite genre results are based on the percentage of respondents who had a ‘very’ or ‘somewhat favorable’ impression of the movie genre.

The ‘white-male’ stereotyping of science fiction fans, so common within entertainment news stories about ‘toxic’ fans, also permeates descriptions of the gaming community, an entertainment subculture that shares many of the same franchises (Star Wars, The Avengers, The Witcher, Halo, etc.) popular within the science fiction and fantasy communities. Despite knowing better, when I think of a ‘gamer,’ I, too, think of people like my teenage son and his male cohorts.

Yet, in a 2008 study, Pew Research found that self-described “gamers” were 65 percent male and 35 percent female, and in 2013 Nintendo reported that its game-system users were evenly divided between men and women.

Conflating “white male” stereotypes of science fiction fans with “toxic” fans serves a dark purpose within Hollywood: The industry believes it can’t re-brand some of its most successful franchises without first destroying the foundations upon which those franchises were built.

In the process of doing that, Hollywood has ignited a war with some of its most loyal customers, who have been labeled within the industry as "toxic fans."

A Cold War between Science Fiction Fans and Hollywood Continues

The digital cesspool — otherwise known as social media — can’t stop filling my inbox with stories about how “woke” Hollywood is vandalizing our most cherished science fiction and superhero franchises (Star Wars, Star Trek, Batman and Doctor Who, etc.) or how supposedly malevolent slash racist/sexist/homophobic/transphobic fans are bullying those who enjoy the recent “re-imaginings” of these franchises. All sides are producing more noise than insight.

In the midst of these unproductive, online shouting matches, there is real data to suggest much of the criticism of Disney's Star Wars trilogy (The Force Awakens, The Last Jedi, and The Rise of Skywalker), CBS' new Star Trek shows and the BBC's 13th iteration of Doctor Who is rooted in genuine popularity declines within those franchises.

I produced the following chart in a previous post about the impact of The Force Awakens on interest in the Star Wars franchise:

Figure 1: Worldwide Google searches on ‘Star Wars’ from January 2004 to May 2020

Source: Google Trends

The finding was that The Force Awakens failed to maintain the interest in Star Wars it had initially generated, as evidenced by the declining "trend in peaks" with subsequent Disney Star Wars films.

A similar story can be told for some of TV’s most prominent science fiction and superhero franchises.

The broadcast TV audience numbers are well-summarized at these links: Star Trek: Discovery, Doctor Who, and Batwoman (Season 1 & Season 2).

Bottom line: Our most enshrined science fiction and superhero franchises are losing audiences fast.

‘The Mandalorian’ Offers Hope on How To Re-Brand a Franchise

Whether these audience problems are due to bad writing, bad marketing, negative publicity caused by a small core of "toxic" fans, or just unsatisfied fan bases is an open dispute. What can be said with some certitude is that these franchises have underwhelmed audiences in their latest incarnations, with one exception…Disney's Star Wars streaming series: The Mandalorian.

Methodological Note:

In a previous post I’ve shown that Google search trends strongly correlate with TV streaming viewership: TV shows that generate large numbers of Google searches tend to be TV shows people watch. A similar relationship has been shown to exist between movie box office figures and Google search trends.
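For readers who want to pull a comparable series themselves, here is a short sketch using the gtrendsR package. This is my own suggestion for retrieving the data programmatically, not necessarily how the original charts were produced (they may simply have been exported from the Google Trends website):

# Pull worldwide Google search interest for "The Mandalorian" over roughly the
# period covered by Figure 2, then plot the weekly series.
library(gtrendsR)

mando <- gtrends(keyword = "The Mandalorian", time = "2019-09-01 2021-02-01")
trend <- mando$interest_over_time

plot(as.Date(trend$date), suppressWarnings(as.numeric(trend$hits)), type = "l",
     xlab = "Week", ylab = "Search interest (0 = low, 100 = peak)",
     main = "Google search interest: The Mandalorian")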

Figure 2 shows the Google search trends since September 2019 for Disney's The Mandalorian. Over the two seasons the show has been available on Disney+ (Season 1: Nov. 12 — Dec. 27, 2019; Season 2: Oct. 30 — Dec. 18, 2020), intraseasonal interest in the show has generally gone up with each successive episode, with the most interest occurring for the season's final episode. This "rising peaks" phenomenon — indicative of a well-received and successful TV or movie series — was particularly evident in The Mandalorian's second season, where characters very popular among long-time fans periodically emerged over the course of the season: Boba Fett, Bo-Katan, Ahsoka Tano, and (of course) Luke Skywalker.

Figure 2: Google search trends for Disney’s The Mandalorian (Sept. 2019 to Feb. 2021)

Source: Google Trends

It has only been two seasons, but The Mandalorian’s creative leaders — Jon Favreau and Dave Filoni — have been able to maintain steady audience interest, though they run the risk of eating their seed corn with the frequent fan-favorite character roll outs. It will not take long for them to run out of cherished and widely-known Star Wars characters. [Jon/Dave, I love Shaak Ti, but what are the chances she will ever come back?]

Nonetheless, The Mandalorian stands in stark contrast to some other science fiction and superhero franchises who have struggled to build their audiences in the past five years.

Figure 3 shows four TV shows with declining intraseasonal and/or interseasonal peaks. We’ve discussed Discovery and Doctor Who’s audience problems above, but Supergirl deserves some particular attention as my son and I watched the show faithfully through the first four seasons.

Figure 3: Google search trends for Star Trek: Discovery, Doctor Who, Batwoman, and Supergirl (Feb. 2016 to Feb. 2021)

Source: Google Trends

Supergirl, whose title character is played by a most charming Melissa Benoist, debuted on October 26, 2015 on CBS and averaged 9.8 million viewers per episode in its freshman season, making it the 8th most watched TV show for the year — a solid start.

Melissa Benoist speaking at the 2019 San Diego Comic Con International (Photo by Gage Skidmore; Used under CCA-Share Alike 2.0 Generic license.)

Regrettably, CBS moved the show to its sister network, The CW, where it experienced an immediate drop of 6.5 million viewers in its first season there and another 1.5 million viewers over the next three seasons. [Supergirl has since been cancelled.]

How did this happen?

Before blaming the show's overt 'wokeness' — women were always competent, while white men were either evil (Jon Cryer's Lex Luthor was magnificent, though) or lovesick puppies — the spotlight must be turned on the CBS executives who decided to anchor the show to The CW's other superhero shows (The Flash and Arrow) in an effort to help the flagging sister network. Their attempt failed and Supergirl paid the price.

At the same time, Supergirl didn’t build on its smaller CW audience and that problem rests on the shoulders of the show’s creative minds, particularly Jessica Queller and Robert Rovner, who were the showrunners after the second season.

First, what happened to Superman? Supergirl’s cousin, Kal-El, was prominent in the second season, but then disappeared in Season 3 (apparently, he had a problem to solve in Madagascar caused by Reign, an existential threat to our planet who Supergirl happened to be fighting at the time). Superman couldn’t break away to help his cousin?

I watched the Supergirl TV show because I was a fan of her DC comics in childhood and few realize that Supergirl comics, at least among boys, were more popular than Wonder Woman’s in the 1960s, according to comics historian Peter Sanderson. And central to her story was always her cousin, Superman. But for reasons seemingly unrelated to the sentiments of the show’s fans, Superman’s appearances after Season 2 were largely limited to the annual Arrowverse crossover episodes.

Fine, Supergirl's showrunners wanted the show to live or die based on the Supergirl character, not Superman. I get it. But it was a waste of one of the franchise's greatest assets.

Second, the script writing on Supergirl changed noticeably after Season 2, with story lines mired in overly convenient plot twists (M’ymn, the Green Martian, gives up his life and merges with the earth to stop Reign’s terraforming the planet? How does that work? We’ll never know.), and clunky teaching moments on topics ranging from gun control to homosexuality. Instead of being a lighthearted diversion from the real world, as it mostly was in its first two seasons, the show’s writers thought it necessary to repurpose MSNBC content. Supergirl stopped being fun.

What Lessons are Learned?

The most important lesson from Coca-Cola’s New Coke blunder was that mistakes can be rectified, if dealt with promptly and earnestly. It is OK to make mistakes. You don’t even have to admit them. But you do have to address them.

In 1985, the year of New Coke’s introduction, Coca-Cola’s beverage lines owned 32.3 percent of the U.S. market to Pepsi’s 24.8 percent. Today, Coca-Cola owns 43.7 percent of the non-alcoholic beverage market in the U.S., compared to Pepsi’s 24.1 percent.

With The Mandalorian’s success, Hollywood may still realize that burning down decades of brand equity earned by franchises such as Star Wars, Star Trek and Doctor Who is not a sound business plan. The good news is, it is not too late to make amends with the millions of ardent fans who have supported these franchises through the good, the bad and the Jar Jar Binks. That Star Wars fans can now laugh about Jar Jar is proof of that.

  • K.R.K.

Send comments to: nuqum@protonmail.com

The most important moment in human history may have passed without much notice

Parkes Radio Telescope in Australia which detected possible extraterrestrial signals from Proxima Centauri last year (Photo by Maksym Kozlenko; Used under CCA-Share Alike 4.0 International license.)

By Kent R. Kroeger (Source: NuQum.com; January 25, 2021)

Some background music while you read ==> Undiscovered Moon (by Miguel Johnson)

Shane Smith, an intern in the University of California at Berkeley’s Search for Extraterrestrial Intelligence (SETI) program, was the first to see the anomaly buried in petabytes of Parkes Radio Observatory data.

It was sometime in October of last year, the start of Australia's spring, when Smith found a strange, unmodulated narrowband emission at 982.002 megahertz seemingly coming from Proxima Centauri, our Sun's closest stellar neighbor.

While there have been other intriguing radio emissions — 1977’s “Wow” signal being the most famous — none have offered conclusive evidence of alien civilizations. Similarly, the odds are in favor of the Parkes signal being explained by something less dramatic than extraterrestrial life; but, as of now, that has not happened.

"It has some particular properties that caused it to pass many of our checks, and we cannot yet explain it," Dr. Andrew Siemion, director of the University of California, Berkeley's SETI Research Center, told Scientific American recently. "We don't know of any natural way to compress electromagnetic energy into a single bin in frequency," Siemion says. "For the moment, the only source that we know of is technological."

Proof of an extraterrestrial intelligence? No, but initial evidence offering the intriguing possibility? Why not. And if another radio telescope were to also detect this tone at 982.002 megahertz coming from Proxima Centauri, a cattle stampede of conjecture would likely erupt.

As yet, however, the scientists behind the Parkes Radio Telescope observations have not published the details of their potentially momentous discovery, as they still contend, publicly, that the most likely explanation for their data is human-sourced.

“The chances against this being an artificial signal from Proxima Centauri seem staggering,” says Dr. Lewis Dartnell, an astrobiologist and professor of science communication at the University of Westminster (UK).

Is there room for “wild speculation” in science?

We live in a time when being called a "conspiracy theorist" is among the worst smears possible, no matter how dishonest or unproductive the charge. How dare you not agree with consensus opinion!

However, science presumably operates above the daily machinations of us peons. How could any scientist make a revolutionary discovery if not by tearing down consensus opinion? Do you think when Albert Einstein published his relativity papers he was universally embraced by the scientific community? Of course not.

“Sometimes scientists have too much invested in the status quo to accept a new way of looking at things,” says writer Matthew Wills, who studied how the scientific establishment in Einstein’s time reacted to his relativity theories.

But just because scientists cannot yet explain the Parkes signal doesn’t mean the most logical conclusion should be “aliens.” There are many less dramatic explanations that also remain under consideration.

At the same time, we need to prepare ourselves for the possibility the Parkes signal cannot be explained as a human-created or natural phenomenon.

“Extraordinary claims require extraordinary evidence.” — Astrophysicist Carl Sagan’s rewording of Laplace’s principle, which says that “the weight of evidence for an extraordinary claim must be proportioned to its strangeness”

“When you have eliminated the impossible, whatever remains, however improbable, must be the truth.” — Sir Arthur Conan Doyle, stated by Sherlock Holmes.

The late Carl Sagan was a scientist but became famous as the host of the PBS show “Cosmos” in the 1980s. Sherlock Holmes, of course, is a fictional character conceived by Sir Arthur Conan Doyle. It should surprise few then that Sagan’s quote about ‘extraordinary claims’ aligns comfortably with mainstream scientific thinking, while the Holmes quote is referred to among logicians and scientific philosophers as the process of elimination fallacy — when an explanation is asserted as true on the belief that all alternate explanations have been eliminated when, in truth, not all alternate explanations have been considered.

If you are a scientist wanting tenure at a major research university, you hold the Sagan (Laplace) quote in high regard, not the Holmesian one.

The two quotes encourage very different scientific outcomes: Sagan’s biases science towards status quo thinking (not always a bad thing), while Holmes’ aggressively pushes outward the envelope of the possible (not always a good thing).

Both serve an important role in scientific inquiry.

Oumuamua’s 2017 pass-by

Dr. Abraham (“Avi”) Loeb, the Frank B. Baird Jr. Professor of Science at Harvard University, consciously uses both quotes when discussing his upcoming book, “Extraterrestrial: The First Sign of Intelligent Life Beyond Earth,” as he recently did on the YouTube podcast “Event Horizon,” hosted by John Michael Godier.

His book details why he believes that the first observed interstellar object in our solar system, first sighted in 2017 and nicknamed Oumuamua (the Hawaiian term for ‘scout’), might have been created by an alien civilization and could be either some of their space junk or a space probe designed to observe our solar system, particularly the third planet from the Sun. For his assertion about Oumuamua, Dr. Loeb has faced significant resistance (even ridicule) from many in the scientific community.

Canadian astronomer Dr. Robert Weryk calls Dr. Loeb’s alien conclusion “wild speculation.” But even if Dr. Weryk is correct, what is wrong with some analytic provocation now and then? Can heretical scientific discoveries advance without it?

Dr. Loeb, in turn, chides his critics right back for their lack of intellectual flexibility: “Suppose you took a cell phone and showed it to a cave person. The cave person would say it was a nice rock. The cave person is used to rocks.”

Hyperbolic trajectory of ʻOumuamua through the inner Solar System, with the planet positions fixed at the perihelion on September 9, 2017 (Image by nagualdesign — Tomruen; Used under the CCA-Share Alike 4.0 International license.)

Dr. Loeb’s controversial conclusion about Oumuamua formed soon after it became apparent that Oumuamua’s original home was outside our solar system and that its physical characteristics are unlike anything we have previously observed. Two characteristics specifically encourage speculation about Oumuamua’s possible artificial origins: First, it is highly elongated, with perhaps a 10-to-1 aspect ratio. If confirmed, that shape is unlike any asteroid or comet ever observed, according to NASA. And, second, it was observed accelerating as it started to exit our solar system without ejecting the large amounts of dust and gas that comets typically shed as they pass near our Sun.

In stark contrast to comets and other natural objects in our solar system, Oumuamua is very dry and unusually shiny (for an asteroid). Furthermore, according to Dr. Loeb, the current data on its shape cannot rule out the possibility that it is flat — like a sail — though the consensus view remains that Oumuamua is long, rounded (not flat) and possibly the remnants of a planet that was shredded by a distant star.

I should point out that other scientists have responded in detail to Dr. Loeb’s reasons for suggesting Oumuamua might be alien technology and an excellent summary of those responses can be found here.

Place your bets on whether the Parkes signal and/or Oumuamua are signs of alien intelligence

What are the chances Oumuamua or the Parkes signal are evidence of extraterrestrial intelligent life?

If one asks mainstream scientists, the answers would cluster near ‘zero.’ Even the scientists involved in discovering the Parkes signal will say that. “The most likely thing is that it’s some human cause,” says Pete Worden, executive director of the Breakthrough Initiatives, the project responsible for detecting the Parkes signal. “And when I say most likely, it’s like 99.9 [percent].”

In 2011, radiation from a microwave oven in the lunchroom at Parkes Observatory was, at first, mistakenly confused with an interstellar radio signal. These things happen when you put radiation sources near radio telescopes looking for radiation.

Possibly the most discouraging news for those of us who believe advanced extraterrestrial intelligence commonly exists in our galaxy is a recent statistical analysis published in May 2020 in the Proceedings of the National Academy of Sciences by Dr. David Kipping, an assistant professor in Columbia University’s Department of Astronomy.

In his paper, Dr. Kipping employs an objective Bayesian analysis to estimate the odds ratios for the early emergence of life on an alien planet and for the subsequent development of intelligent life. Since he only had a sample size of 1 — Earth — he used Earth’s timeline for the emergence of early life (which occurred about 500 million years after Earth’s formation) and intelligent life (which took another 4 billion years) to run a Monte Carlo simulation. In other words, he estimated how often elementary life forms and then intelligent life would emerge if we repeated Earth’s history many times over.

Dr. Kipping’s answer? “Our results find betting odds of >3:1 that abiogenesis (the first emergence of life) is indeed a rapid process versus a slow and rare scenario, but 3:2 odds that intelligence may be rare,” concludes Dr. Kipping.

Put differently, there is a 75 percent chance our galaxy is full of low-level life forms that formed early in a planet’s history, but a 60 percent chance that human-like intelligence is quite rare.
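For readers who want to check the conversion, the percentages above are simply Dr. Kipping’s betting odds re-expressed as probabilities. Here is a minimal sketch of that arithmetic (my own illustration, not code from the paper):

```python
# Convert betting odds of "favorable:against" into a probability.
def odds_to_probability(favorable, against):
    return favorable / (favorable + against)

print(odds_to_probability(3, 1))  # >3:1 that abiogenesis is rapid  -> 0.75
print(odds_to_probability(3, 2))  # 3:2 that intelligence is rare   -> 0.60
```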

Dr. Kipping is not suggesting humans are alone in the galaxy, but his results suggest intelligence is rare enough that a similarly intelligent life form living in our nearest neighboring star system, Proxima Centauri, is unlikely.

What a killjoy.

My POOMA-estimate of Extraterrestrial Intelligent Life

I want to believe aliens living in the Proxima Centauri system are broadcasting a beacon towards Earth saying, in effect, “We are over here!” I also want to believe Oumuamua is an alien probe (akin to our Voyager probes now leaving the confines of our solar system).

If either is true (and we may never know), it would be the biggest event in human history…at least until the alien invasion that will follow happens.

Both events leave me with questions: If Oumuamua is a reconnaissance probe, shouldn’t we have detected electromagnetic signatures suggesting such a mission? [It could be a dead probe.] And in the case of the Parkes signal, if a civilization is going to go to the trouble of creating a beacon signal (which requires a lot of energy directed at a specific target in order to be detectable at great distances), why not throw some information into the signal? Something like, “We are Vulcans. What is your name?” or “When is a good time for us to visit?” And why do these signals never reappear? [At this writing, there have been no additional narrowband signals detected from Proxima Centauri subsequent to the ones found last year.]

Given the partisan insanity that grips our nation and the fear-mongering meatheads that overpopulate our two political parties, we would be well-served by a genuine planetary menace. We would all gain some perspective. And I’ll say it now, in case they are listening: I, for one, welcome our new intergalactic overlords (Yes, I stole that line from “The Simpsons”).

In the face of an intergalactic invasion force, we may look back and realize that a lot of the circumstantial evidence of extraterrestrial life we had previously dismissed as foil-hat-level speculation was, in reality, part of a bread crumb trail to a clearer understanding of our place in the galactic expanse.

So, are Oumuamua and the possible radio signal from the Proxima Centauri system evidence of advanced extraterrestrial life? In isolation, probably not. But I wonder what evidence we have overlooked because our best scientific minds are too career conscious to risk their professional reputations.

I don’t have a professional reputation to protect, so here is my guess as to whether Oumuamua and/or the possible Proxima Centauri radio signal are actual evidence of advanced extraterrestrial life: A solid 5 percent probability.

The chance that we’ve overlooked other confirmatory evidence already captured by our scientists? A much higher chance…say, a plucky 20 percent probability.

Turning the question around, given our time on Earth, our technology, and the amount of time we’ve been broadcasting towards the stars, what are the chances an alien civilization living nearby would detect our civilization? Probably a rather good chance, but not in the way they did in the 1997 movie Contact. There is no way the 1936 Berlin Olympics broadcast would be detectable and recoverable even a few light-years away from Earth. Instead, aliens are more likely to see evidence of life in the composition of our atmosphere.

And what is my estimated probability that advanced extraterrestrial life (of the space-traveling kind) lives in our tiny corner of the Milky Way — say, within 50 light years of our Sun? Given there are at least 133 Sun-like stars within this distance (many with planets in the organic life-friendly Goldilocks zone) and probably 1,000 more planetary systems orbiting red dwarf stars, I give it an optimistic 90 percent chance that intelligent life lives nearby.
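My 90 percent is a gut number, but here is one way to sanity-check it. If each nearby planetary system independently hosts intelligent life with some small probability, the chance that at least one of them does adds up quickly. The per-system probability used below (0.2 percent) is purely my illustrative assumption:

```python
# Back-of-envelope check: P(at least one) = 1 - (1 - p)**N
N = 133 + 1000          # Sun-like systems plus a rough count of red dwarf systems within ~50 ly

def p_at_least_one(p_per_system, n=N):
    return 1 - (1 - p_per_system) ** n

print(p_at_least_one(0.002))  # ~0.90 -- a 0.2% per-system chance already gets you to ~90%
```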

We are not likely to be alone. In fact, we probably don’t have the biggest house or the smartest kids in our own cul-de-sac. We are probably average.

I am even more convinced that we live in a galaxy densely populated with life at every point on the advancement scale, a galactic menagerie that has more in common with Gene Roddenberry’s Star Trek than with Dr. Kipping’s 3:2 odds against such an abundance of intelligent life.

So, it won’t surprise me if someday we learn that aliens in the Proxima Centauri system were trying to contact us or that Oumuamua was a reconnaissance mission of our solar system by aliens looking for a hospitable place to explore (and perhaps spend holidays if the climate permits). I’m not saying that is what happened, I’m just saying I would not be surprised.

  • K.R.K.

Send comments to: nuqum@protonmail.com

Be part of the solution, not the problem

By Kent R. Kroeger (Source: NuQum.com; January 16, 2021)

Could Donald Trump’s presidency have ended any other way?

What happened at — and, more importantly, in — the U.S. Capitol on January 6th was tragic. People died because an uncontrollable mob formed outside the U.S. Capitol to support a president who, at best, was recklessly naive about what a mass rally like that could turn into; and, at worst, deliberately ignited those flames.

If only Trump instead of me had gotten this fortune cookie and taken it to heart:

“If you win, act like you are used to it. If you lose, act like you love it.” — A fortune cookie

To my Biden-supporting readers, concerned that I am going to defend Trump’s actions leading up to the storming of the U.S. Capitol on January 6th, rest easy. I am not.

Now is not the time to discover the mental gymnastics necessary to excuse a political act — Trump’s rally to “Stop the Steal” — that a child would have realized had the potential to provoke significant violence.

To my Trump-supporting readers, already practicing levels of emotional isolation and self-censorship that can’t possibly be good for your long-term health, you will be spared any self-important, virtue-signaling lecture about the moral righteousness of Republicans “brave” enough to disown Trump or how the GOP’s many latent malignancies were exposed (and exploited) by the Trump presidency.

No, instead, I will use the January 6th debacle to share what I am telling myself so I can help make sure something like that sh*t-carnival never happens again.

For starters…

Now is NOT the time to say, ‘They started it.’

I will not, for partisan purposes, compare or equate the Capitol riot to last year’s George Floyd/Black Lives Matter protests, in which at least 19 people died and which caused approximately $1.5 billion in property damage.

Protests turning deadly are not that uncommon in U.S. history, and they’ve been instigated from both the left and the right. We have even seen gun violence directed at U.S. House members within the Capitol building itself before (the 1954 Capitol shooting).

But to use the 2021 Capitol riot tragedy to propel the narrative that violence is primarily the domain of the political right is to willfully ignore instances such as Flint, Michigan, where 12 people died in a Legionnaires’ disease outbreak after a Democrat mayor, a Republican governor, and an oddly passive Environmental Protection Agency under Barack Obama carelessly switched Flint’s water supply in order to save money.

One might say that Flint is a different kind of violence, and they’d be right. I think it’s worse. It’s silent. Its perpetrators are hard to identify. And justice and restitution are even harder to secure.

Or how about the hundreds of mostly brown people U.S. drones and airstrikes kill every year? These military and intelligence actions, uniformly funded by bipartisan votes since the 9/11 attacks, have arguably accomplished little except make the U.S. the world’s most prolific killer of pine nut farmers in Afghanistan.

Whether or not we acknowledge it, deadly violence is a central part of our culture, and no political party, ideology, race or ethnicity is immune from being complicit in it.

Now is NOT the time to call other people conspiracy theorists — especially since we are all inclined to be one now and then.

While I emphatically oppose the overuse of mail-in voting (particularly when third parties are allowed to collect and deliver large numbers of completed ballots) on the grounds that it compromises two core principles of sound election system design — timeliness and integrity — it is regrettable that Trump and his subordinates have encouraged his voters to believe the three-headed chimera that the 2020 presidential election was stolen. The evidence simply isn’t there, as hard as they try to find it.

That said, for Democrats or anyone else to call Trump voters “conspiracy theorists” is to turn a blind eye to a four-year Democratic Party and news media project called Russiagate that, in the brutal end, found no evidence of a conspiracy between the 2016 Trump campaign and the Russians to influence the 2016 election. At this point my Democrat friends usually lean in and say something like, “The Mueller investigation found insufficient evidence to indict Trump and his associates on conspiracy charges — read the Mueller report!” At which time I lean in and say, “Read the Mueller report!” There was no evidence of a conspiracy, a term with a distinct legal definition: an agreement between two or more people to commit an illegal act, along with an intent to achieve the agreement’s goal.

What the Mueller report did document was: (1) the Trump campaign’s clumsy quest to find Hillary Clinton’s 30,000 deleted emails (George Papadopoulos and Roger Stone), (2) the incoming Trump administration’s opening of a dialogue with a Russian diplomat (Sergey Kislyak) through a Trump administration representative (General Michael Flynn), and (3) the Trump organization’s effort to build a Trump Tower in Moscow. All of those actions were legal — as they should be.

And, yes, I am skeptical that Lee Harvey Oswald acted alone — even as I believe he was the lone gunman. If that makes me a conspiracy theorist, so be it.

Now is NOT the time to shame people for believing that most of our political elites work more for the political donor class than the average American (whoever that is).

I do not believe the data supports the thesis that economic grievances are the primary factor behind Trump’s popularity within the Republican Party. Instead, the evidence says something deeper drives Trump support, more rooted in race, social status, and culture than economics.

Still, the stark realization that our political system is broken binds many progressive Democrats and Trump supporters, and it has been continually buried over the past four-plus years of anti-Trump media coverage: this country has a political-economic system primarily designed to fulfill the interests of a relatively small number of Americans.

In Democracy in America?: What Has Gone Wrong and What We Can Do About It (University of Chicago Press, 2017), perhaps the most important political science book of the past thirty years, political scientists Benjamin Page and Martin Gilens offer compelling evidence that public policy in the U.S. is best explained by the interests of elites, not those of the average American. In fact, this disconnect is so bad, in their view, that it is fair to ask whether Americans even live in a democracy.

“Our analysis of some 2,000 federal government policy decisions indicates that when you take account of what affluent Americans, corporations and organized interest groups want, ordinary citizens have little or no independent influence at all,” Page and Gilens said in a Washington Post interview while promoting their book. “The wealthy, corporations and organized interest groups have substantial influence. But the estimated influence of the public is statistically indistinguishable from zero.”

“This has real consequences. Millions of Americans are denied government help with jobs, incomes, health care or retirement pensions. They do not get action against climate change or stricter regulation of the financial sector or a tax system that asks the wealthy to pay a fair share. On all these issues, wealthy Americans tend to want very different things than average Americans do. And the wealthy usually win.”

And while Page and Gilens’ research rightfully has methodological detractors, the most direct statistical indicator of its validity — wealth inequality — has been growing steadily in the U.S. since 1990, with a few temporary pauses during the Clinton administration, the 2008 worldwide financial crisis, and the Trump administration (yes, you read that right).

Image for post
Source: St. Louis Federal Reserve

Only the disproportionate amount of the coronavirus pandemic relief money going to corporate bank accounts has put the wealthiest 1-percent back near their Obama administration highs.

So while Trump supporters don’t always marshal the best evidence-based critiques of the American political system, with a little more effort and the help of better leaders it wouldn’t be hard for them to do so.

Now is NOT the time to reduce three-fifths of our population down to words like ‘fascist’ and ‘racist.’

Are there racist Republicans? Of course there are — around 45 percent among white Republican voters, according to my analysis of the 2018 American National Election Study (Pilot). That same analysis, which used a measure of racial bias common in social science literature, found 20 percent of white Democrat voters have a more favorable view of their race relative to African-Americans and/or Hispanics. Any assumption that racism is unique or in a more toxic form among Trump supporters is challenged by the evidence.

Now IS the time for cooler heads to prevail, which eliminates almost anyone appearing on the major cable news networks in the past two weeks.

The national news media profits from the use of exaggeration and hyperbole. That can never be discounted when talking about events such as what happened January 6th.

Here is how Google searches on the term ‘coup d’état’ were affected by the Capitol riot:

Image for post
Source: Google Trends

I confess I was not horrified watching live on social media as Trump supporters forced their way into the Capitol. I was shocked, but not horrified. A small semantic difference, but an important one. At no point did I think I was watching an ongoing coup d’état.

But for my family and friends that watched the mob unfold on the major cable news networks, they thought an actual coup d’état was in motion — that this mob was viably attempting to stop the electoral college vote, overturn the 2020 election, and keep Trump in the presidency.

Where the news media has an obligation to discern fact from fantasy, it did the exact opposite on January 6th. In fact, it helped fan the spread of disinformation coming out of news reports from inside the Capitol.

As disconcerting as the scene was on January 6th, there is a chasm-sized difference between Facebook chuckle heads causing a deadly riot and a credible attempt to take over the U.S. government.

This is how journalist Michael Tracey described the Capitol riot and the media’s predilection for hyperbole while reporting on it:

“Is it unusual for a mob to breach the Capitol Building — ransacking offices, taking goofy selfies, and disrupting the proceedings of Congress for a few hours? Yes, that’s unusual. But the idea that this was a real attempt at a “coup” — meaning an attempt to seize by force the reins of the most powerful state in world history — is so preposterous that you really have to be a special kind of deluded in order to believe it. Or if not deluded, you have to believe that using such terminology serves some other political purpose. Such as, perhaps, imposing even more stringent censorship on social media, where the “coup” is reported to have been organized. Or inflicting punishment on the man who is accused of “inciting” the coup, which you’ve spent four years desperately craving to do anyway.

Journalists and pundits, glorying in their natural state — which is to peddle as much free-flowing hysteria as possible — eagerly invoke all the same rhetoric that they’d abhor in other circumstances on civil libertarian grounds. “Domestic terrorism,” “insurrection,” and other such terms now being promoted by the corporate media will nicely advance the upcoming project of “making sure something like this never happens again.” Use your imagination as to what kind of remedial measures that will entail.

Trump’s promotion of election fraud fantasies has been a disaster not just for him, but for his “movement” — such as it exists — and it’s obvious that a large segment of the population actively wants to be deceived about such matters. But the notion that Trump has “incited” a violent insurrection is laughable. His speech Monday afternoon that preceded the march to the Capitol was another standard-fare Trump grievance fest, except without the humor that used to make them kind of entertaining.”

This is not a semantic debate. What happened on January 6th was not a credible coup attempt, despite verbal goading from a large number of the mob suggesting as much and notwithstanding Senator Ted Cruz’ poorly-timed fundraising tweet that some construed (falsely) as his attempt to lead the nascent rebellion.

Still, do not confuse my words with an exoneration of Trump’s role in the Capitol riot. To the contrary, time and contemplation have led me to conclude that Trump is wholly responsible for the deadly acts conducted (literally) under banners displaying his name, regardless of the fact that his speech that morning did not directly call for a violent insurrection. In truth, he explicitly said the opposite: “I know that everyone here will soon be marching over to the Capitol building to peacefully and patriotically make your voices heard.”

Nonetheless, he had to know the potential was there and it was his job to lead at that moment. He didn’t.

Now IS the time to encourage more dialogue, not less — and that means fewer “Hitler” and “Communist” references (my subsequent references notwithstanding).

Along with Page and Gilens’ book on our democracy’s policy dysfunction, another influential book for me has been Yale historian Timothy Snyder’s On Tyranny: Twenty Lessons from the Twentieth Century (Tim Duggan Books, 2017). In it, he uses historical examples to explain how governments use tragedies and crises to increase their control over society (and not usually for the common good).

For example, weeks after Adolf Hitler was made Chancellor of Germany, he used the Reichstag fire on February 27, 1933, to issue The Reichstag Fire Decree which suspended most civil liberties in Germany, including freedom of the press and the right of public assembly.

“A week later, the Nazi party, having claimed that the fire was the beginning of a major terror campaign by the Left, won a decisive victory in parliamentary elections,” says Snyder. “The Reichstag fire shows how quickly a modern republic can be transformed into an authoritarian regime. There is nothing new, to be sure, in the politics of exception.”

It would be reductio ad absurdum to use Hitler’s shutting down of Communist newspapers as the forewarning to a future U.S. dictatorship caused by Twitter banning Trump. Our democracy can survive Trump’s Twitter ban. At the same time, our democracy isn’t stronger for it.

Conservative voices are now being systematically targeted for censorship, as journalist Glenn Greenwald (not a conservative) described in a recent Twitter salvo.

Final Thoughts

Today, because of what happened on January 6th, the U.S. is not as free as it was even a month ago, and it is fruitless to blame one person, a group of people, the news media or a political party for this outcome. We have all contributed in a tiny way by isolating ourselves in self-selected information bubbles that keep us as far away as humanly possible from challenging and unpleasant thoughts. [For example, I spend 99 percent of my social media time watching Nerdrotic and Doomcock torch Disney, CBS and the BBC for destroying my favorite science fiction franchises: Star Wars, Star Trek and Doctor Who.]

A few days ago I chatted with a neighbor who continues to keep his badly dog-eared, F-150-sized Trump sign in his front yard. He talked weather, sports, and movies. Not a word on politics. I wanted to, but knew not to push it. If he had mentioned the current political situation, I would have offered this observation:

Political parties on the rise always overplay their hand. How else can you explain how the Democrats, facing an historically unpopular incumbent president — during a deep, pandemic-caused recession — could still lose seats in U.S. House elections? Republicans are one midterm election away from regaining the House of Representatives and the two years until the next congressional election is a political eternity.

The Republicans will learn from the 2021 Capitol riot.

As for the Democrats, I would just suggest this fortune cookie wisdom:

Image for post

Actually, that is wisdom for all of us.

  • K.R.K.

Send comments to: nuqum@protonmail.com

The status quo is back — expect them to cry about the budget deficit

By Kent R. Kroeger (January 21, 2021)

Political scientist Harold Lasswell (1902–1978) said politics is about ‘who gets what, when and how.’

He wrote it in 1936, but his words are more relevant than ever.

In the U.S., his definition is actualized in Article I, Section 8 of the U.S. Constitution:

The Congress shall have Power To lay and collect Taxes, Duties, Imposts and Excises, to pay the Debts and provide for the common Defense and general Welfare of the United States; but all Duties, Imposts and Excises shall be uniform throughout the United States;

To borrow Money on the credit of the United States;

To regulate Commerce with foreign Nations, and among the several states, and with the Indian Tribes;

To establish an uniform Rule of Naturalization, and uniform Laws on the subject of Bankruptcies throughout the United States;

To coin Money, regulate the Value thereof, and of foreign Coin, and fix the Standard of Weights and Measures.

In short, the U.S. Congress has the authority to create money — which they’ve done in ex cathedra abundance in the post-World War II era.

According to the U.S. Federal Reserve, the U.S. total public debt is 127 percent of gross domestic product (or roughly $27 trillion) — a level unseen in U.S. history (see Figure 1).

Figure 1: Total U.S. public debt as a percent of gross domestic product (GDP)

Image for post
Source: St. Louis Federal Reserve

And who owns most of the U.S. debt? Not China. Not Germany. Not Japan. Not the U.K. It is Americans who own roughly 70 percent of the U.S. federal debt.

It’s like owing money to your family — and if you’ve ever had that weight hanging over your head, you might prefer owing the money to the Chinese.

When it comes to dishing out goodies, the U.S. Congress makes Santa Claus look like a hack.

But, unlike Saint Nick, Congress doesn’t print and give money to just anyone who’s been good — Congress plays favorites. About 70 percent goes to mandatory spending, composed of interest payments on the debt (10%), Social Security (23%), Medicare/Medicaid (23%), and other social programs (14%). As for the other 30 percent of government spending, called discretionary spending, 51 percent goes to the Department of Defense.

That leaves about three trillion dollars annually to allocate for the remaining discretionary expenditures. To that end, the Congress could just hand each of us (including children) $9,000, but that is crazy talk. Instead, we have federal spending targeted towards education, training, transportation, veteran benefits, health, income security, and the basic maintenance of government.
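The $9,000 aside is just per-capita arithmetic on the rough figures above; here is a quick check, using the text’s approximate numbers rather than official budget data:

```python
# Per-capita arithmetic behind the "$9,000 per person" aside.
population = 330_000_000        # approximate U.S. population cited earlier
discretionary_pool = 3e12       # the "about three trillion dollars" figure above

print(round(discretionary_pool / population))  # ~9091 dollars per person (children included)
```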

There was a time when three trillion dollars was a lot of money — and maybe it still is — but it is amazing how quickly that amount of money can be spent with the drop of a House gavel and a presidential signature.

The Coronavirus Aid, Relief, and Economic Security Act, passed by the U.S. Congress and signed into law by President Donald Trump on March 27, 2020, came in at $2.2 trillion, with about $560 billion going to individual Americans and the remainder to businesses and state or local governments.

That is a lot of money…all of it debt-financed. And the largest share of it went directly to the bank accounts of corporate America.

And what do traditional economists tell us about the potential impact of this new (and old) federal debt? Their collective warning goes something like this:

U.S. deficits are partially financed through the sale of government securities (such as T-bonds) to individuals, businesses and other governments. The practical impact is that this money is drawn from financial reserves that could have been used for business investment, thereby reducing the potential capital stock in the economy.

Furthermore, due to their reputation as safe investments, the sale of government securities can impact interest rates when they force other types of financial assets to pay interest rates high enough to attract investors away from government securities.

Finally, the Federal Reserve can inject money into the economy either by directly printing money or through central bank purchases of government bonds, such as the quantitative easing (QE) policies implemented in response to the 2008 worldwide financial crisis. The economic danger in these cases, according to economists, is inflation (i.e., too much money chasing too few goods).

How does reality match with economic theory?

I am not an economist and don’t pretend to have mastered all of the quantitative literature surrounding the relationship between federal debt, inflation and interest rates, but here is what the raw data tells me: If there is a relationship, it is far from obvious (see Figure 2).

Figure 2: The Relationship between Federal Debt, Inflation and Interest Rates

Image for post
Source: St. Louis Federal Reserve

Despite a growing federal debt, which has gone from just 35 percent of GDP in the mid-1970s to over 100 percent of GDP following the 2008 worldwide financial crisis (blue line), interest rates and annual inflation rates have fallen over that same period. Unless there is a 30-year lag, there is no clear long-term relationship between federal deficits and interest rates or inflation. If anything, the post-World War II relationship has been negative.
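Readers who want to eyeball the same relationship can pull the underlying series from the St. Louis Fed’s FRED service. The sketch below is one way to do that in Python; the FRED series IDs and the pandas_datareader calls are my assumptions about how to reproduce something like Figure 2, not the exact data behind it:

```python
# Pull debt, inflation and interest-rate series from FRED and check their correlations.
import pandas_datareader.data as web

start, end = "1966-01-01", "2020-12-31"
debt  = web.DataReader("GFDEGDQ188S", "fred", start, end)  # federal debt as % of GDP (quarterly)
cpi   = web.DataReader("CPIAUCSL", "fred", start, end)     # consumer price index (monthly)
rates = web.DataReader("DGS10", "fred", start, end)        # 10-year Treasury yield (daily)

df = debt.rename(columns={"GFDEGDQ188S": "debt_pct_gdp"})
df["inflation_yoy"] = cpi["CPIAUCSL"].pct_change(12).mul(100).resample("QS").last()
df["rate_10y"] = rates["DGS10"].resample("QS").mean()

# If the reading above is right, the long-run correlations here are, if anything, negative.
print(df.corr())
```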

Given mainstream economic theory, how is that possible?

The possible explanations are varied and complex, but among the reasons for continued low inflation and interest rates, despite large and ongoing federal deficits, are an abundant labor supply, premature monetary tightening by the Federal Reserve (keeping the U.S. below full employment), globalization, and technological (productivity) advances.

Nonetheless, the longer interest rates and inflation stay subdued amidst a fast-growing federal debt, the more likely it becomes that heterodox macroeconomic theories — such as Modern Monetary Theory (MMT) — will grow in popularity among economists. At some point, consensus economic theory must catch up to the facts on the ground.

What is MMT?

Investopedia’s Deborah D’Souza offers a concise explanation:

Modern Monetary Theory says monetarily sovereign countries like the U.S., U.K., Japan, and Canada, which spend, tax, and borrow in a fiat currency they fully control, are not operationally constrained by revenues when it comes to federal government spending.

Put simply, such governments do not rely on taxes or borrowing for spending since they can print as much as they need and are the monopoly issuers of the currency. Since their budgets aren’t like a regular household’s, their policies should not be shaped by fears of rising national debt.

MMT challenges conventional beliefs about the way the government interacts with the economy, the nature of money, the use of taxes, and the significance of budget deficits. These beliefs, MMT advocates say, are a hangover from the gold standard era and are no longer accurate, useful, or necessary.

More importantly, these old Keynesian arguments — empirically tenuous, in my opinion — needlessly restrict the range of policy ideas considered to address national problems such as universal access to health care, growing student debt and climate change. [Thank God we didn’t get overly worried about the federal debt when we were fighting the Axis in World War II!]

Progressive New York congresswoman Alexandria Ocasio-Cortez has consistently shown an understanding of MMT’s key tenets. When asked by CNN’s Chris Cuomo how she would pay for the social programs she wants to pass, her answer was simple (and I paraphrase): the federal government can pay for Medicare-for-All, student debt forgiveness, and the Green New Deal the same way it pays for a nearly trillion-dollar annual defense budget: just print the money.

In fact, that is essentially what this country has done since President Lyndon Johnson decided to prosecute a war in Southeast Asia at the same time he launched the largest set of new social programs since the New Deal.

Such assertions, however, generate scorn from status quo-anchored political and media elites, who are now telling the incoming Biden administration that the money isn’t there to offer Americans the $2,000 coronavirus relief checks promised by Joe Biden as recently as January 14th. [I’ll bet the farm I don’t own that these $2,000 relief checks will never happen.]

Cue the journalistic beacon of the economic status quo — The Wall Street Journal — which plastered this headline above the front page fold in its January 19th edition: Janet Yellen’s Debt Burden: $21.6 Trillion and Growing

WSJ writers Kate Davidson and Jon Hilsenrath correctly point out that the incoming U.S. Treasury secretary, Yellen, was the Chairwoman of the Clinton administration’s White House Council of Economic Advisers and among its most prominent budget deficit hawks, and offer this warning: “The Biden administration will now contend with progressives who want even more spending, and conservatives who say the government is tempting fate by adding to its swollen balance sheet.”

This misrepresentation of the federal debt’s true nature is precisely what MMT advocates are trying to fight. They note that when Congress spends money, the U.S. Treasury creates a debit from its operating account (through the Federal Reserve) and deposits this Congress-sanctioned new money into private bank accounts in the commercial banking sector. In other words, the federal debt boosts private savings — which, according to MMT advocates, is a good thing when the “debt” addresses any slack (i.e., unused economic resources) in the economy.
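Stripped to its accounting skeleton, the claim is simple: every dollar of deficit spending shows up as a dollar of private-sector deposits. Here is a toy double-entry sketch of that claim (my simplification of the MMT description above, not a model of actual Treasury or Federal Reserve operations):

```python
# Toy ledger: Congress spends, the Treasury's account is debited, private deposits are credited.
ledger = {
    "treasury_account_at_fed": 0.0,   # Treasury's operating balance
    "private_bank_deposits":   0.0,   # aggregate deposits in the commercial banking sector
}

def congress_spends(amount):
    ledger["treasury_account_at_fed"] -= amount   # the "debt" side of the entry
    ledger["private_bank_deposits"]   += amount   # the private-savings side

congress_spends(2.2e12)   # e.g., the CARES Act's roughly $2.2 trillion
print(ledger)             # deficit spending appears, dollar for dollar, as private-sector deposits
```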

Regardless of MMT’s validity, this heterodox theory reminds us of how poorly mainstream economic thinking describes the relationship between federal spending and the economy. From what I’ve seen after 40 years of watching politicians warn about the impending ‘economic meltdown’ caused by our growing national debt, consensus economic theory seems more a tool for politicians to scold each other (and their constituents) about the importance of the government paying its bills than it is a genuine way to understand how the U.S. economy works.

Yet, I think everyone can agree on this: Money doesn’t grow on trees, it grows on Capitol Hill. And as the U.S. total public debt has grown, so have the U.S. economy and wealth inequality — which are intricately interconnected through, as Lasswell described 85 years ago, a Congress (and president) who decide ‘who gets what, when and how.’

  • K.R.K.

Send comments and your economic theories to: nuqum@protonmail.com

Beadle (the Data Crunching Robot) Predicts the NFL Playoffs

By Kent R. Kroeger (Source: NuQum.com; January 15, 2021)

Beadle (the Data Crunching Robot); Photo by Hello Robotics (Used under the Creative Commons Attribution-Share Alike 4.0 International license)

Since we are a mere 24 hours away from the start of the NFL Divisional Round playoffs, I will dispense with any long-winded explanation of how my data loving robot (Beadle) came up with her predictions for those games.

Suffice it to say, despite her Bayesian roots, Beadle is a rather lazy statistician who typically eschews the rigors and challenges of building statistical models from scratch for the convenience of cribbing off the work of others.

Why do all that work when you can have others do it for you?

There is no better arena in which to reward Beadle’s sluggishness than predicting NFL football games, as there are literally hundreds of statisticians, data modelers and highly-motivated gamblers who publicly share their methodologies and resultant game predictions for all to see.

Why reinvent the wheel?

With this frame-of-mind, Beadle has all season long been scanning the Web for these game predictions and quietly noting those data analysts with the best prediction track records. Oh, heck, who am I kidding? Beadle stopped doing that about four weeks into the season.

What was the point? It was obvious from the beginning that all, not most, but ALL of these prediction models use mostly the same variables and statistical modeling techniques and, voilà, come up with mostly the same predictions.

FiveThirtyEight’s prediction model predicted back in September that the Kansas City Chiefs would win this year’s Super Bowl over the New Orleans Saints. And so did about 538 other prediction models.

Why? Because they are all using the same data inputs and whatever variation in methods they employ to crunch that data (e.g., Bayesians versus Frequentists) is not different enough to substantively change model predictions.

But what if the Chiefs are that good? Shouldn’t the models reflect that reality?

And it can never be forgotten that these NFL prediction models face a highly dynamic environment in which quarterbacks and other key players can get injured over the course of a season, fundamentally changing a team’s prospects — a fact FiveThirtyEight’s model accounts for with respect to QBs — and the reason preseason model predictions (and Vegas betting lines) need to be updated from week to week.

Beadle and I are not negative towards statistical prediction models. To the contrary, given the infinitely complex contexts in which they are asked to make judgments, we couldn’t be more in awe of the fact that many of them are very predictive.

Before I share Beadle’s predictions for the NFL Divisional Round, I should extend thanks to these eight analytic websites that shared their data and methodologies: teamrankings.com, ESPN’s Football Power Index, sagarin.com, masseyratings.com, thepowerrank.com, ff-winners.com, powerrankingsguru.com, and simmonsratings.com.

It is from these prediction models that Beadle aggregated their NFL team scores to generate her own game predictions.

Beadle’s Predictions for the NFL Divisional Playoffs

Without any further ado, here is how Beadle ranks the remaining NFL playoff teams on her Average Power Index (API), which is simply each team’s standardized score (z-score) computed after averaging that team’s index scores across the eight prediction models:

Analysis by Kent R. Kroeger (NuQum.com)
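The API computation itself is two lines of work: average each team across the models, then convert the averages to z-scores. A minimal sketch follows; the team names, column names and rating values are hypothetical placeholders, not Beadle’s actual inputs:

```python
# Average-then-standardize: the core of Beadle's Average Power Index (API).
import pandas as pd

ratings = pd.DataFrame({
    "team":    ["Chiefs", "Packers", "Saints", "Bills"],
    "model_1": [31.2, 28.5, 27.9, 27.1],
    "model_2": [30.4, 27.8, 28.6, 26.9],
    # ...columns for the remaining prediction models would go here
})

avg = ratings.drop(columns="team").mean(axis=1)        # average each team across the models
ratings["API"] = (avg - avg.mean()) / avg.std(ddof=0)  # standardize the averages into z-scores

print(ratings[["team", "API"]].sort_values("API", ascending=False))
```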

And from those API values, Beadle makes the following game predictions (including point spreads and scores) through the Super Bowl:

No surprise: Beadle predicts the Kansas City Chiefs will win the Super Bowl in a close game with the New Orleans Saints.

But you didn’t need Beadle to tell you that. FiveThirtyEight.com made a similar prediction five months ago.

  • K.R.K.

Send comments to: nuqum@protonmail.com

The data do not support the Miami Dolphins bailing on Tua Tagovailoa

[Headline photo: Two cheerleaders for the Miami Dolphins football team (Photo by Jonathan Skaines; Used under the CCA-Share Alike 2.0 Generic license.)]

First, an apology to my wife. The above photo was one of the few Miami Dolphins-related images with a usable public license I could find on short notice. It should not be regarded, however, as an endorsement of fake smiles.

Now, to the issue at hand…

Alabama’s Tua Tagovailoa was the fifth overall pick and second quarterback taken in the 2020 National Football League (NFL) draft.

Drafted by the Miami Dolphins, Tagovailoa was selected behind Heisman winner Joe Burrow (QB — Cincinnati Bengals) and Ohio State’s Chase Young (DE — Washington Sea Dogs) and was one of four quarterbacks taken in the first round. The Los Angeles Chargers took the third quarterback, Oregon’s Justin Herbert, with the sixth overall pick, and Green Bay — mysteriously — thought Utah State’s Jordan Love, the 26th overall pick and fourth quarterback taken, was the final piece needed for the Aaron Rodgers-led Packers to win another Super Bowl (…and, even more mysteriously, Love’s clipboard-holding skills seem to be what the Cheeseheads needed this season).

Normally when an NFL team drafts a quarterback as high as fifth, they give him at least a few years to earn his first-round contract. The Tampa Bay Buccaneers gave first overall pick Jameis Winston five years, as did the Tennessee Titans with second overall pick Marcus Mariota. Sam Bradford and Mark Sanchez were given four years to prove their value to their respective teams, the St. Louis Rams and New York Jets. The oft-injured Robert Griffin III — the Washington Federals’ selection at No. 2 overall in the 2012 draft — had three years. Even purple drank chugging rumors didn’t stop JaMarcus Russell from getting two solid years of opportunity from the Oakland Raiders.

And, keep in mind, the Dolphins’ 2012 first round pick — and current Titans quarterback — Ryan Tannehill gave the team six mediocre seasons before they jettisoned him in 2019. The Dolphins were patient with Tannehill — who has turned into a high-quality quarterback — so why not with Tagovailoa?

Dolphins owner Stephen M. Ross, who famously said “there’s a lot of good and I believe there’s a lot of bad” about his friend President Donald Trump, has a low-profile personality and is not known for creating drama, even if he has been impatient with his head coaches (six of them since he bought the team in 2008).

Yet, if he allows his football team’s brain trust to draft another quarterback in the first round, he will get more than drama; he will completely undercut the already fragile confidence of his current starter, Tagovailoa.

So why are a significant number of NFL draft experts seriously recommending the Dolphins use their third pick in the 2021 draft on another quarterback? Writing for ESPN, three out of seven experts said the Dolphins should use their pick on another quarterback:

Jeremy Fowler, national NFL writer: Quarterback. Key word is “address.” Miami needs to thoroughly evaluate the top quarterbacks in the draft, then weigh the pros and cons of not taking one and sticking with Tagovailoa as the unquestioned starter. Miami owes it to its fans and organization to at least do that. This is the one position where a surplus isn’t a bad thing. Keep drafting passers high if necessary. Tua might be the guy regardless. And if the Dolphins decide he’s better than Zach Wilson or Justin Fields or Trey Lance, then grab the offensive tackle or playmaking receiver Miami needs around him.

Mike Clay, fantasy football writer: Quarterback. You don’t have to agree with me on this, but I’ve always been in the camp of “If you’re not sure you have a franchise quarterback, you don’t have a franchise quarterback.” From my perspective, we don’t know whether Tua Tagovailoa is the answer, as he didn’t look the part and was benched multiple times as a rookie. Miami’s future looks bright after a 10-win season in Brian Flores’ second campaign, so it’s unlikely this franchise will be picking in the top five again anytime soon. If they aren’t convinced Tua is the franchise quarterback, they need to avoid sunk-cost fallacy and a trip to long-term quarterback purgatory.

Seth Walder: Quarterback. Tagovailoa still might pan out, but quarterback is too important for Miami to put all of its eggs in that basket, especially after he finished 26th in QBR and clearly did not earn complete trust from the coaching staff. Take a shot at whichever of the top three quarterbacks is left on the board while keeping Tagovailoa, at least for now. That way, Miami can maximize its chances of finding its franchise QB.

And the question must be asked: why? Has Tagovailoa grossly under-performed? If Miami drafts another quarterback just a year after taking Tagovailoa, the only conclusion one can draw is that the Dolphins consider him a bust. But with only a year under his belt, is that even possible to know?

Before assessing Tagovailoa’s performance in his rookie season, we should consider the possible comparisons. The first comparison is the most obvious: compare Tagovailoa to other quarterbacks in their first significant playing year (which I define as a quarterback’s first year with at least three starts and 50 or more pass attempts — admittedly, a low threshold).

Also, for comparability’s sake, I’ve decided to compare only quarterbacks drafted in the first round since 2005, the year in which www.pro-football-reference.com starts computing ESPN’s Total QBR Index (QBR) for quarterbacks. While other quarterback metrics have been posited as better measures of quarterback quality — passer rating, adjusted net yards per pass attempt — none are perfect, as they don’t directly account for the style of a team’s offense, the quality of a team’s personnel, or the quality of the defense, all of which play a significant role in how a quarterback plays. In the end, I went with the statistic that best predicts wins: ESPN’s QBR.

[I should add that while the QBR does not consider the strength of schedule (SoS) faced by a quarterback, an SoS adjustment is easily computed and nicely demonstrated in a past analysis by Chase Stuart on footballperspective.com. In a follow-up to this essay, I will incorporate SoS information into player performance metrics for the 2020 season.]

The second comparison is Tagovailoa’s own performance from game to game: did he improve? And the final comparison is against the QBR scale itself. By design, ESPN’s QBR provides an approximate objective standard by which to judge quarterbacks: QBRs exceeding 50 represent above-average quarterbacks when compared to all quarterbacks since 2006.
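For anyone who wants to replicate the first comparison, the filter is easy to express in code. This is a minimal sketch assuming a hypothetical per-season table; the file name and column names are placeholders, not an actual pro-football-reference.com export:

```python
# Find each first-round quarterback's "first significant playing year"
# (first season with at least 3 starts and 50+ pass attempts), per the definition above.
import pandas as pd

seasons = pd.read_csv("qb_seasons.csv")  # hypothetical: one row per quarterback-season
# expected columns: player, draft_year, draft_round, season, starts, attempts, qbr

eligible = seasons[(seasons["draft_round"] == 1) &
                   (seasons["draft_year"] >= 2005) &
                   (seasons["starts"] >= 3) &
                   (seasons["attempts"] >= 50)]

first_sig = eligible.sort_values("season").groupby("player", as_index=False).first()

print(first_sig[["player", "qbr"]].sort_values("qbr", ascending=False))
```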

I will dispense with the last comparison first: Tagovailoa’s rookie-year QBR, based on nine starts, 290 pass attempts, a 64.1 percent completion rate and 11 touchdown passes against five interceptions, is 52.9: above average by that historical standard, though it puts him 26th out of the 35 quarterbacks for whom the QBR was computed in 2020.

Well, on this comparison at least, Tagovailoa does not stand out in a positive way. But perhaps his performance improved over the season? Hard to say. His first start in Week 8 against the Los Angeles Rams — the NFL’s best passing defense — produced a 29.3 QBR, and over his next eight starts he achieved QBRs over 60 against the Arizona Cardinals (Week 9, QBR 87.3), the Los Angeles Chargers (Week 10, QBR 66.5), the Cincinnati Bengals (Week 13, QBR 74.5) and the Las Vegas Raiders (Week 16, QBR 64.4). Conversely, he struggled against the Denver Broncos (Week 11, QBR 22.9), the Kansas City Chiefs (Week 14, QBR 30.2), and the Buffalo Bills (Week 17, QBR 23.3) — all good passing defenses.

After these first two comparisons, it is hard to decide if Tagovailoa is going to be Miami’s franchise quarterback for the future. As with almost any rookie quarterback, there are positives and negatives, and neither overwhelms the other in Tagovailoa’s case.

However, in our final comparison, I believe Tagovailoa has more than proven it is far too soon for the Dolphins to spend a Top 3 draft choice on another quarterback.

First, we should look at the season-to-season QBRs of quarterbacks who are arguably “franchise” quarterbacks and who were picked in the first round (see Figure 1 below). And if you don’t consider Kyler Murray, Ryan Tannehill, Baker Mayfield or Jared Goff franchise quarterbacks, check in with me in a couple of years. All four are currently in a good, mid-career trajectory by historical standards.

Figure 1: Season-to-Season QBRs for NFL “franchise” Quarterbacks Selected in the 1st Round since 2005

Image for post
Data Source: www.pro-football-reference.com

Three things jump out to me from Figure 1: (1) Franchise quarterbacks rarely have seasons with dismal overall QBRs (<40), (2) Aaron Rodgers really is that great, and (3) Patrick Mahomes, still early in his career, is already in the QBR stratosphere (…and he almost has nowhere to go but down).

How does Tagovailoa compare to my selection of franchise quarterbacks and non-franchise quarterbacks, as well as the other quarterbacks in the 2020 first round draft class (Joe Burrow and Justin Herbert)? As it turns out, pretty good (see Figure 2).

As for the non-franchise quarterbacks, my most controversial assignments are Cam Newton and Joe Flacco. I welcome counter-arguments, but their inclusion in either group does not change the basic conclusion from Figure 2 with respect to Tagovailoa.

Figure 2: Season-to-Season QBRs for NFL “franchise” & “non-franchise” Quarterbacks Selected in the 1st Round since 2005

Image for post
Data Source: www.pro-football-reference.com

In comparison to the other quarterbacks and their first substantive year in the NFL, Tagovailoa’s 2020 QBR is slightly below the average for franchise quarterbacks (52.9 versus 54.6, respectively), and is significantly higher than for non-franchise quarterbacks (52.9 versus 46.1, respectively).

Among his 2020 draft peers, Tagovailoa’s QBR is comparable to Burrow’s (who missed six games due to a season-ending injury), but a far cry from Herbert’s (QBR = 69.7); Herbert is already showing clear signs of superstardom ahead.

Experts are happy to debate whether Tagovailoa has the ability to “throw guys open,” or whether the level of receiver talent he had at Alabama masked his deficiencies. He may well never be a franchise quarterback by any common understanding of the category.

But given his performance in his rookie campaign and how it compares to other quarterbacks, it is unfathomable to me that the Dolphins could entertain even the slightest thought of drafting a quarterback in the 2021 draft. I hope they are not and it is merely some ESPN talking heads with that wild hair up their asses.

  • K.R.K.

Send comments to: nuqum@protonmail.com

Why opinion journalists are sometimes bad at their job (including myself)

[Headline graphic by Dan Murrell; Data source: RottenTomatoes.com]

By Kent R. Kroeger (Source: NuQum.com; January 4, 2021)

Opinion journalists, such as movie critics, bring biases to every opinion they hold and complete objectivity is an ideal few, if any, attain.

The scientific literature on this trait common to all humans, not just opinion journalists, is vast and well-established. The lenses through which we interact with the world are multilayered and varied, each of us with our own unique configuration.

The science tells us we tend to overestimate our own knowledge while underestimating the knowledge of others (“Lake Wobegon effect”); we tend to believe an idea that has been repeated to us multiple times or is easy to understand, regardless of its actual veracity (“illusory truth effect”); we overestimate the importance of recent information relative to historic information (“recency effect”); we offer to others the opinions that will be viewed more favorably by them and often suppress our unpopular opinions (“social desirability bias”); and perhaps the most dangerous bias of all: confirmation bias — our inclination to search for, process and remember information that confirms our preconceptions to the exclusion of information that might challenge them.

But nowhere are human biases more socially destructive than when opinion journalists project onto others the motivations for their personal opinions and actions. It is often called the illusion of transparency, and it occurs when we overestimate our own ability to understand what drives someone else’s opinions and behaviors. [The other side of that same bias occurs when we overestimate the ability of others to know our own motivations.]

The illusion of transparency often leads to fundamental attribution errors, in which the explanations for the opinions and behaviors of others are falsely reduced to psychological and personality-based factors (“racist,” “sexist,” “lazy,” “stupid,” etc.).

Intergroup bias takes the illusion of transparency to the group level: members of a group give preferential treatment to their own group, which often leads to intellectual atrophy as new ideas struggle to enter the group. In combination, these tendencies to falsely infer the motives of others can create systematic, group-level misunderstandings, potentially leading to violent social conflicts.

Judge not, that ye be not judged (Matthew 7:1-3 KJV)

I know something of these biases, as I engage in them when I write, including in my last opinion essay about the unusual proportion of male movie critics that gave Wonder Woman 1984 (WW84) a positive review (“Are movie critics journalists?“). Though I have never met one of these male movie critics, I still felt comfortable attributing their positive WW84 reviews to their being handpicked by WW84’s movie studio (Warner Bros.) for early access to the movie, along with their desire to “please their editors and audience” (a presumed manifestation of the social desirability bias) and other career motives.

Was I right? I offered little evidence beyond mere conjecture as to why the few early negative reviews for WW84 came almost entirely from female movie critics (I basically said liberal men are “useless cowards“). For that I am regretful. I can do better.

Yet, I still believe there was a clear bias among some movie critics in favor of WW84 for reasons unrelated to the actual quality of the movie. How is it possible that, out of the 19 male movie critics in Rotten Tomatoes’ “Top Critics” list who reviewed WW84 in the first two days of its Dec. 15th pre-release, not one gave WW84 a bad review? Not one.

If we assume the reviews were independent of one another and that the actual quality of WW84 warranted 80 percent positive reviews (an assumption purely for argument’s sake), then the probability that we’d get 19 consecutive positive reviews from the top male movie critics is a mere 1.4 percent ( = 0.8^19). If we use WW84’s current Rotten Tomatoes score among all critics of 60 percent as our assumption, that probability goes to near zero.
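The arithmetic is easy to verify, and the assumed 80 percent per-critic positive rate is, as noted, purely for argument’s sake:

```python
# Probability of 19 straight positive reviews, assuming independent critics.
p_positive = 0.80          # assumed per-critic chance of a positive review (for argument's sake)
n_critics = 19

print(p_positive ** n_critics)  # ~0.0144, i.e., about a 1.4% chance

# Using WW84's site-wide 60% score instead pushes the probability toward zero:
print(0.60 ** n_critics)        # ~6e-5
```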

I can only draw one conclusion: Early reviews by the top male critics were excessively positive for WW84.

As to why this happened, be my guest with your own theories and hypotheses. Do I think Warner Bros. paid for good WW84 reviews? That is the typical straw man argument Hollywood journalists like to use to discredit critics of entertainment journalism. I have no evidence of money changing hands between Warner Bros. and selected movie critics, and I have never suggested as much.

Do I think editors, peer pressure, and even the general public mood weigh heavily on movie critic reviews? Absolutely, yes, and scientific evidence in other social contexts suggests this is likely the case.

Which is why when I read other journalists and movie critics suggest that negative WW84 reviews are motivated by deep-rooted sexism, I cry, “Foul!”

No, critics of “Wonder Woman 1984” are not sexist

In a recent article for Forbes, movie critic and screenwriter Mark Hughes concludes that much of the criticism of WW84, especially from male critics, is motivated by nothing less than sexism. He writes:

Questions of the film’s tone and action sequences are frankly of little interest to me, since most of the same folks offering up those complaints were eager to praise the silliness of many other superhero films. One day it’s “these films take themselves too seriously,” and the next it’s “this film is silly and should take itself more seriously.” Wash, rinse, repeat as necessary (or as clicks and payday necessitate).

Likewise, when men helm films we see far more willingness to weigh “that which works” as more important than “that which doesn’t work,” and allow them room to come back later and impress us. A woman, though? Not so much, as Patty Jenkins has been personally insulted and condemned by voices declaring Wonder Woman 1984 an inexcusable offense to humanity. If you think I’m being hyperbolic about the accusations hurled against the film and its defenders, go look around social media and press coverage for 30 seconds, and then come back to finish this article…

In other words, according to Hughes, we don’t have to be conscious of our deeply ingrained, latent sexism to be subject to its power. Merely disliking a movie directed by a woman proves its existence.

Let me start by noting that many of the male (and female) movie critics who did not like WW84 gave glowing reviews to director Patty Jenkins’ first Wonder Woman movie in 2017. Chris Stuckmann is as good an example as any of the flaw in Hughes’ sexism charge: compare Stuckmann’s 2017 Wonder Woman review with his WW84 review.

Did Stuckmann’s latent sexism only kick in after 2017? Of course not. The more likely explanation is that Stuckmann realizes Wonder Woman (2017) is a very good movie and WW84 is not.

But since Hughes is carelessly willing to suggest critics like Stuckmann are driven by subconscious sexist tendencies when they review movies by female directors, let me conjecture that Hughes had a much more powerful motivation for giving WW84 a good review.

Hughes is a screenwriter (as well as being a movie critic) and one of the well-known attributes of Hollywood culture is that directors, writers, and actors do not publicly like to piss on someone else’s work. It can be career suicide, particularly when that person directed one of the best movies of 2017 (Wonder Woman) and is widely admired within the industry. Even if sexism is alive and well in Hollywood (and I have no doubt that it is), by virtue alone of having helmed two great movies in her young career — Monster (2003) and Wonder Woman (2017) — Jenkins possesses real power by any Hollywood standard.

That Hughes liked WW84 is not surprising. I would have been stunned if Hughes hadn’t.

My complaint about Hughes’ recent Forbes article chastising the “harsher” critics of WW84 is not that Hughes thought WW84 was a good film. That Hughes appreciated the positive themes in WW84 enough to overlook the movie’s obvious flaws is truly OK. [My family, myself notwithstanding, loved the movie.] I’ve loved many movies that, objectively, were rather bad (Nicolas Cage in The Wicker Man comes to mind).

My problem with Hughes (and, unfortunately, far too many writers and journalists at present) is that he throws around psychological theories and personal accusations without a shred of empirical evidence.

Hughes doesn’t know the motivations for why someone writes a critical review any more than I do.

But Hughes takes it one step farther. He implies there’s a dark, antisocial aspect to someone who doesn’t like WW84. He asks: “Do you look at the world around you and decide we need LESS storytelling that appeals to our idealism and posits a world in which grace and mercy are transformative, in which people can look at the truth and make a choice in that moment to try to be better?”

No, Mr. Hughes, I do not think we need LESS storytelling that appeals to our idealism and better angels. But I believe we need MORE GOOD storytelling that does that. Unfortunately, in my opinion, WW84 does not meet that standard. Furthermore, when Hollywood and our entertainment industry do it poorly, I fear it risks generating higher levels of cynicism towards the very ideals you (and I) endorse.

As one of my government bosses once said as he scolded me, “Kent, good intentions don’t matter. I want results.”

I think that dictum applies to Hollywood movies too.

  • K.R.K.

Send comments to: nuqum@protonmail.com

 

Are movie critics journalists? Reviews for “Wonder Woman 1984” suggest many are not.

[Headline photo: Gal Gadot speaking at the 2016 San Diego Comic Con International, in San Diego, California (Photo by Gage Skidmore; used under CCA-Share Alike 2.0 Generic license.)]

By Kent R. Kroeger (Source: NuQum.com; December 31, 2020)

A friend of mine from graduate school — whose opinions I trusted, particularly when it came to movies and popular culture (for example, he introduced me to South Park)— shocked me one day when he told me he hated The Godfather.

How can someone who loves movies hate The Godfather?! How could someone so well-informed — he is today a recognized expert in the role and social importance of myth-making — be so utterly wrong?

The answer is quite simple: Danny prided himself on being a critic and he had a genuine problem with The Godfather, particularly the acting and dramatic pacing. [A similarly harsh critique of The Godfather was written in 1972 by Stanley Kauffmann of The New Republic.]

The reality is, thoughtful people can have dramatic differences in opinion, especially when it comes to things as subjective as movies and entertainment. [I love Monty Python and my Stanford PhD wife thinks they are moronic. Both opinions can be correct.]

Still, I’m convinced if you put 100 well-educated movie critics in a room to discuss The Godfather, 95 of them would say the movie is an American classic, and most would probably put one or both of the first two Godfather movies in the Top 20 of all time. The ‘wisdom of the crowd’ represents something real and cannot be ignored.

At the same time, those five Godfather-dismissing critics are no less real and their opinions are no less meritorious — assuming they aren’t pursuing an agenda unrelated to judging the quality of The Godfather.

But that is the problem I fear contaminates too many movie reviews today. Movie critics, by training and platform, are ‘opinion journalists.’ As such, they filter their opinions through a desire to please (and impress) an immediate social circle (and bosses), as well as through the mood of the times. We all do that, as it is only human.

But good journalists, including movie critics, fight that tendency — or, at least, I believe they should make the attempt.

In the case of movie criticism, to not do so risks compromising the value of the critiques. At best, it renders the criticism worthless, and at worst, malevolent.

It is fair to ask at this point, what the hell am I talking about?

I am not going to review Wonder Woman 1984 (WW84) here. I enjoy reading movie criticism, but I don’t enjoy writing it. However, if I did review WW84, it might sound something like this review by Alteori:

As much as I thought Gal Gadot raised her game in WW84, I didn’t think anybody else did. But my overall reaction to WW84 was driven, in part, by what I did before I even saw the movie.

My first mistake (besides grudgingly subscribing to HBOMax — whose horrible, wretched parent company I once worked at for a short time) was to read one of the embargo-period reviews. Those are reviews from movie critics pre-selected by Warner Bros. to see the film prior to a wider release.

Normally, for movies I am excited to see, I avoid the corporate hype and eschew the early reviews. I want my opinion to be uncorrupted by other opinions. WW84 was one of those movies because, as this blog can attest, I am a huge fan of the first Wonder Woman movie (2017), particularly Gal Gadot’s portrayal of the superhero and the way director Patty Jenkins and screenwriter Allan Heinberg avoided turning the film into a platform for some watered-down, partisan political agenda. Wonder Woman (2017) was a film made for everyone.

[By the way, as a complete digression, I don’t care when people mispronounce names, especially when it is a name outside someone’s native language. But I don’t understand why 9-out-of-10 movie reviewers still pronounce Gal Gadot’s name wrong. It couldn’t be simpler. It is Gal (as in guys and gals) and Guh-dote (as in, ‘my grandmother dotes on me’). Here is Gal to help you with the pronunciation.]

But, for reasons unknown, I decided to read one “Top Critic” review of WW84 before seeing the film myself. I will not reveal the reviewer’s name; yet, after seeing WW84, I have no idea what movie that person saw because it wasn’t the WW84 I saw.

This is the gist of that early review (for which I paraphrase in order to protect the identity and reputation of that clearly conflicted reviewer):

Wonder Woman 1984 is the movie we’ve all been waiting for!

If I had only read the review more closely, I would have seen the red flags. Words and phrases like “largely empty spectacle,” “narratively unwieldy,” “overwrought,” “overdrawn,” and “self-indulgent” were sprinkled throughout, if only I had been open to those hints.

In fact, after reading nearly one hundred WW84 reviews in the last two weeks, I see now that movie critics will often leave a series of breadcrumb clues indicating what they really thought of the movie. At the office they may be shills for the powerful movie industry, but similar to Galen Erso’s design of the Death Star, they will plant the seed of destruction for even the most hyped Hollywood movie. In other words, they may sell their souls to keep their jobs, but they still know a crappy movie when they see one.

Maybe ‘crappy’ is too strong, but WW84 was not a good movie — not by any objective measure that I can imagine. Don’t take my word for it. Read just a few of the reviews on RottenTomatoes.com by movie critics who still put their professional integrity ahead of their party schedule: Hannah Woodhead, Fionnuala Halligan, Angelica Jade Bastién, and Stephanie Zacharek.

It’s not a coincidence that these movie critics are all women. It is clear to me that they have been gifted a special superpower which allows them to see through Hollywood’s faux-wokeness sh*t factory. That male movie critics are too afraid to see it, much less call it out, is further proof that one of the byproducts of the #MeToo movement is that liberal men are increasingly useless in our society. They can’t even review a goddamn movie with any credibility. Why are we keeping them around? What role do they serve?

Alright. Now I’ve gone too far. The vodka martinis are kicking in. I’m going to stop before I type something that generates the FBI’s attention.

I’ll end with this: I still love Gal Gadot and if WW84 had more of her and less of everyone else in the movie, I would have enjoyed the movie more. Hell, if they filmed Gal Gadot eating a Cobb salad for two-and-a-half hours I would have given the movie two stars out of four.

To conclude, if you get one thing from this essay, it is this: Gal Guh-dote. Gal Guh-dote. Gal Guh-dote. Gal Guh-dote. Gal Guh-dote. Gal Guh-dote. Gal Guh-dote. Gal Guh-dote. Gal Guh-dote…

  • K.R.K.

Send comments to: nuqum@protonmail.com

Why the Season 2 finale of ‘The Mandalorian’ matters to so many of us

[Headline graphic: The Mandalorian (Graphic by Gambo7; used under the CCA-Share Alike 4.0 Int’l license)]

By Kent R. Kroeger (Source: NuQum.com; December 27, 2020)

________________________________

“As in the case of many great films, maybe all of them, we don’t keep going back for the plot.” – Martin Scorsese

“I don’t care about the subject matter; I don’t care about the acting; but I do care about the pieces of film and the photography and the soundtrack and all of the technical ingredients that made the audience scream. I feel it’s tremendously satisfying for us to be able to use the cinematic art to achieve something of a mass emotion.” – Alfred Hitchcock

________________________________

After 55-plus years, I can count on two hands and a couple of toes the number of times I’ve cried watching a movie or TV program.

I cried when Mary Tyler Moore turned off the lights at WJM-TV.

I cried when Radar O’Reilly announced Colonel Henry Blake’s death.

I cried when the U.S. Olympic hockey team beat the Soviets in 1980.

I cried when Howard Cosell told us that John Lennon had been killed.

I cried when ET said goodbye to Elliot.

I cried when the Berlin Wall came down in 1989.

I cried when baby Jessica was pulled from a 22-foot well.

I cried when Mandy Moore’s character dies at the end of “A Walk to Remember.”

I cried when Harry Potter and his wife sent their son off to Hogwarts.

I cried when Barack Obama became our 44th president.

I cried when the 33 Chilean miners were rescued.

I cried when the Chicago Cubs won the 2016 World Series.

But I can’t remember crying harder than while watching this season’s final episode of Disney’s “The Mandalorian,” when Luke Skywalker rescues Grogu (more popularly known as ‘Baby Yoda’) from the Empire’s indefatigable, post-Return of the Jedi remnants.

Since its December 18th release on Disney+, YouTube has been flooded with “reaction” videos of Star Wars fans as they watched a CGI version of a young Luke Skywalker (Mark Hamill) remove his hood before Grogu’s caretaker, Din Djarin (a.k.a. The Mandalorian), and offer to train Grogu in the ways of The Force.

The “reaction” videos range from the highly-staged to the very charming and personal — all are illustrative of the deep affection so many people have for the original Star Wars characters, particularly Luke Skywalker.

For me, however, it is hard to detach from this emotional, collective experience the knowledge that it never would have happened if Lucasfilm (i.e., Disney), under the leadership of Kathleen Kennedy, hadn’t completely botched the Disney sequel movies, starting with “The Force Awakens,” director J. J. Abrams’ visually stunning but soulless attempt at creating a new Star Wars myth, followed by “The Last Jedi,” director Rian Johnson’s inexplicable platform for pissing on the original Star Wars mythos, and ending with “The Rise of Skywalker,” J.J. Abrams’ failed attempt to undo Johnson’s irreparable damage (along with the desecration Abrams himself laid upon the Star Wars brand with “The Force Awakens”).

Though opinions vary among Star Wars fans as to the extent Disney has alienated its core Star Wars audience, almost all agree that Disney’s most unforgivable sin was disrespecting the character of Luke Skywalker, who had been defined during George Lucas’ original Star Wars trilogy as an incurable optimist with an unbreakable loyalty to his family and friends (Princess Leia Organa and Han Solo).

We cried at Season 2’s end of the “The Mandalorian,” not just for the beauty of the moment, but also because of the depth of Disney and Lucasfilm’s betrayal.

Actor Mark Hamill, himself, as he promoted (!) “The Last Jedi,” perfectly described the cultural vandalism perpetrated by Kennedy, Abrams and Johnson on Luke Skywalker:

“I said to Rian (Johnson), Jedis don’t give up. I mean, even if he had a problem he would maybe take a year to try and regroup, but if he made a mistake he would try and right that wrong. So, right there we had a fundamental difference, but it’s not my story anymore, it’s somebody else’s story and Rian needed me to be a certain way to make the ending effective…This is the next generation of Star Wars, so I almost had to think of Luke as another character — maybe he’s ‘Jake Skywalker.’ He’s not my Luke Skywalker.”

That is not exactly what Johnson wanted to hear from one of his “Last Jedi” actors just as the movie was being released. But Hamill’s words spoke for many longtime Star Wars fans.

In fact, many of us believe Disney and Lucasfilm’s Kennedy, with ruthless premeditation, intended to use the Disney sequel movies to malign Lucas’ Star Wars characters (with the exception of Princess Leia) in favor of the Disney-ordained Star Wars cast: Rey Palpatine, Kylo Ren (Ben Solo), Poe Dameron, and Finn.

I’m fairly confident in this prediction: Nobody 10, 20 or 30 years from now is going to care about Rey, Kylo, Poe and Finn. But I’m 99 percent sure we’ll still be talking about Luke Skywalker, if only in recalling how Disney f**ked up one of the most iconic heroes in movie history. Rey inspires no one — including young girls, who apparently were Lucasfilm’s targeted demo with the Disney sequel movies.

Had Disney trusted their own market research, they would have known the only reliable target was the tens of millions of original Star Wars fans (and their children and grandchildren), whose loyalty to Star Wars was proven when they still showed up at theaters for Disney’s three sequel movies, even after their devotion was insulted with the unnecessary diminution of the once dashing and heroic Han Solo (Harrison Ford) and, of course, Luke.

Had Disney treated their core audience with respect, Star Wars fans now might be anticipating Rey’s next cinematic adventure, instead of drowning themselves in the bittersweet giddiness of Luke’s triumphant return on “The Mandalorian.”

To be sure, a lot of Star Wars fans want to put Luke’s return in its proper perspective. We still have to accept that — under the Disney story line — Luke is destined to slump off to a remote island, drinking titty-milk from the teat of a giant alien sea cow while whining that he couldn’t stop his nephew from killing off Luke’s young Jedi pupils (including, presumably, Grogu).

Despite the joyousness of Luke on “The Mandalorian,” the dark cloud of Abrams and Johnson’s bad storytelling still looms large.

But even the biggest Disney critics are allowing themselves to enjoy what Jon Favreau and David Filoni — the creative team behind “The Mandalorian” — are doing for the fans.

One such person is Nerdrotic (Gary Buechler), the bearded crown prince of the amorphous Fandom Menace — a term used to describe a social-media-powered subculture of disgruntled Star Wars fans who are particularly aggrieved at how Lucasfilm has dismantled Star Wars canon, allegedly using the Star Wars brand to pursue a “woke” political agenda at the expense of good storytelling.

“For the first time in a long time, the majority of the fans were happy, and the question you have to ask upfront is, ‘Disney, was it really that hard to show respect to the hero of generations, Luke Skywalker?'” says Buechler. “It must have been, because it took them 8 or 9 years to do it, but when they did do it, it sent a clear message that people still want this type of storytelling, and in this specific case, they want Luke Skywalker because he is Star Wars.”

For me, Luke’s return in “The Mandalorian” is a reminder that great moments are what make movies (and TV shows) memorable, not plot or story lines. People love and remember moments.

As someone who camped out in a dirty theater alleyway in Waterloo, Iowa in the Summer of 1977 to see a movie that was then just called “Star Wars,” I am going to enjoy what Favreau and Filoni gave us on “The Mandalorian” — the moment when the Luke Skywalker we love and remember from childhood returned to Star Wars.

  • K.R.K.

Send comments to: nuqum@protonmail.com

Postscript: In recent days, Lucasfilm and Disney social media operatives have been posting messages reminding us that Luke Skywalker himself, Mark Hamill, is a “fan” of the Disney sequel movies, including Rian Johnson’s “The Last Jedi.”

Perhaps that is true. But I also believe Hamill has made it clear in the past couple of days where his heart resides — with George Lucas’ Luke Skywalker.

News media bias and why it’s poorly understood

By Kent R. Kroeger (Source: NuQum.com; December 21, 2020)

Few conversation starters can ruin an otherwise pleasant dinner party (or prevent you from being invited to future ones) faster than asking: Is the news media biased?

If you ask a Democrat, they will tell you the Fox News Channel is the problem (“They started it!” as if explaining to an elementary school teacher who threw the first punch during a playground fight). Ask a Republican and they will say Fox News is just the natural reaction to the long-standing, pervasive liberal bias of the mainstream media.

This past presidential election has poured gasoline on the two arguments.

In late October, the Media Research Center (MRC), a conservative media watchdog group, released research showing that, between July 29 and October 20, 92 percent of evaluative statements about President Trump by the Big Three evening newscasts (ABC, CBS, NBC) were negative, compared to only 34 percent for Democratic candidate Joe Biden. Apart from a few conservative-leaning news outlets, such as the Fox News Channel and The Wall Street Journal, the MRC release was ignored.

When I shared the MRC research with my wife, her reaction was probably representative of many Democrats and media members: “Why wasn’t their coverage 100 percent negative towards Trump?”

The MRC doesn’t need me to defend their research methods, but I will point out that how they measure television news tone has a long history within media research, dating back to groundbreaking research by Michael J. Robinson and Margaret A. Sheehan, summarized in their 1981 book, “Over the Wire and on TV: CBS and UPI in Campaign ‘80.”

Here is MRC’s description of their news tone measurement method:

“MRC analysts reviewed every mention of President Trump and former Vice President Biden from July 29 through October 20, 2020, including weekends, on ABC’s World News Tonight, the CBS Evening News and NBC Nightly News. To determine the spin of news coverage, our analysts tallied all explicitly evaluative statements about Trump or Biden from either reporters, anchors or non-partisan sources such as experts or voters. Evaluations from partisan sources, as well as neutral statements, were not included.

As we did in 2016, we also separated personal evaluations of each candidate from statements about their prospects in the campaign horse race (i.e., standings in the polls, chances to win, etc.). While such comments can have an effect on voters (creating a bandwagon effect for those seen as winning, or demoralizing the supports of those portrayed as losing), they are not “good press” or “bad press” as understood by media scholars.”

Besides the MRC, there is another data resource on news coverage tone. It is called the Global Database of Events, Language, and Tone (GDELT) Project and was inspired by the automated event-coding work of Georgetown University’s Kalev Leetaru and political scientist Philip Schrodt (formerly of Penn State University).

The GDELT Project is described as “an initiative to construct a catalog of human societal-scale behavior and beliefs across all countries of the world, connecting every person, organization, location, count, theme, news source, and event across the planet into a single massive network that captures what’s happening around the world, what its context is and who’s involved, and how the world is feeling about it, every single day.”

[For a description of the many datasets available through GDELT, you can go here.]

The GDELT Project’s goals are ambitious to say the least, but its data may shed some light on the tone of news coverage during this past presidential election.

It is worth a look-see.

For the following analysis, I queried GDELT’s Global Online News Coverage database, filtering it down to US-only daily news articles that mention either Joe Biden or Donald Trump (but not both) from January 15, 2017 to November 22, 2020.

[The APIs used to query the GDELT database are available in this article’s appendix.]
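For readers who would rather script the pull than use the summary pages, here is a minimal Python sketch. It uses GDELT’s DOC 2.0 API rather than the summary-page URLs listed in the appendix, and the query strings, date window, and response handling are illustrative assumptions — inspect the returned structure before relying on it.

# Sketch: pull daily timelines of US online coverage that mentions one
# candidate but not the other, via the GDELT DOC 2.0 API. The query
# strings and JSON handling here are illustrative assumptions.
import requests

BASE = "https://api.gdeltproject.org/api/v2/doc/doc"

def gdelt_timeline(query: str, mode: str) -> dict:
    """Fetch a timeline (e.g., 'timelinetone' or 'timelinevolraw') as JSON."""
    params = {
        "query": query,                     # e.g., 'Biden -Trump sourcecountry:US'
        "mode": mode,
        "format": "json",
        "startdatetime": "20170115000000",  # January 15, 2017
        "enddatetime": "20201122235959",    # November 22, 2020
    }
    resp = requests.get(BASE, params=params, timeout=60)
    resp.raise_for_status()
    return resp.json()

biden_tone = gdelt_timeline("Biden -Trump sourcecountry:US", "timelinetone")
trump_tone = gdelt_timeline("Trump -Biden sourcecountry:US", "timelinetone")
# The timeline payload nests daily {date, value} pairs; print the top-level
# keys first to confirm the structure before parsing further.
print(list(biden_tone.keys()))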

Resulting from these queries were two metrics for each candidate: The first was the daily volume of online news coverage (measured as the percent of monitored articles), and the second was the average daily tone of online news coverage.

The second metric deserves some additional explanation.

GDELT uses Google’s Natural Language API to inspect a given text and identify the prevailing emotional opinion within the text, especially to determine a writer’s attitude as positive, negative or neutral. A text with a summary score over zero indicates that it was positive in overall tone. The higher a score, the more positive the text’s tone. Similarly, negative values indicate an overall negative tone. Values near zero indicate a text that is either neutral (i.e., no clear tone) or contains mixed tones (i.e., both positive and negative emotions).

For each news article, tone is calculated at the level of the entire article, not the tone of the sentence(s) mentioning Biden or Trump, so a negative article with a positive mention of Biden or Trump will still be scored negative. Finally, online news articles that mentioned both Biden and Trump were excluded from the analysis (83% of Biden articles mentioned both candidates, while only 9% of Trump articles did). In total, 4,593 online news articles were analyzed.

The resulting time-series data set contained five variables: (1) Date, (2) Daily Volume of Biden-focused Online News Coverage, (3) Average Daily Tone of Biden-focused Online News Coverage, (4) Daily Volume of Trump-focused Online News Coverage, and (5) Average Daily Tone of Trump-focused Online News Coverage.

From this data, I computed Biden’s net advantage in online news coverage tone by multiplying, for each candidate, the day’s news volume by the average news coverage tone. Trump’s volume-weighted news coverage tone was then subtracted from Biden’s.
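As a worked example of that computation, here is a minimal pandas sketch; the column names and the two example rows are hypothetical placeholders, not actual GDELT values.

# Sketch: compute Biden's daily net advantage in volume-weighted tone.
# The column names and example values below are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "date":         ["2020-07-25", "2020-11-22"],
    "biden_volume": [0.42, 0.55],   # percent of monitored articles
    "biden_tone":   [1.8, 0.4],     # average article-level tone score
    "trump_volume": [2.10, 1.60],
    "trump_tone":   [-3.2, -0.5],
})

# Volume-weighted tone for each candidate, then the difference.
df["biden_net_advantage"] = (
    df["biden_volume"] * df["biden_tone"] - df["trump_volume"] * df["trump_tone"]
)

print(df[["date", "biden_net_advantage"]])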

Figure 1 (below) shows Biden’s net advantage in online news coverage tone from January 15, 2017 (near the beginning of Trump’s presidential term) to November 22, 2020.

Figure 1: Biden’s Tone Advantage over Trump in US Online News Coverage

Image for post

According to the GDELT data, the tone of Biden-focused US online news coverage was far more positive than Trump-focused news coverage. In fact, online news coverage never favored Trump — not for one single day!

While there has been significant variation in Biden’s tone advantage since 2017 — most notably since August 2020, when Biden saw his tone index advantage decrease from 8.9 to 1.3 by late November 2020 — it is remarkable that even when the U.S. economy was booming in late 2019, well before the coronavirus pandemic had impacted the US, Biden was enjoying a significant advantage in online news tone.

Supporting the validity of the GDELT tone data, variation in Biden’s tone advantage fluctuates predictably with known events that occurred during the 2020 campaign.

In a March 25, 2020, interview with Katie Halper, former Biden staff member Tara Reade alleged that Biden had pushed her against a wall, kissed her, put his hand under her skirt, penetrated her with his fingers, and asked, “Do you want to go somewhere else?”

Beyond this allegation, there is only circumstantial evidence supporting Reade’s charge against Biden. Still, the impact of this allegation manifests itself in how Biden’s tonality advantage varied over time.

On March 25, 2020, Biden enjoyed a 7.7 tonality advantage over Trump. That advantage, however, immediately fell in the weeks following Halper’s Reade interview, reaching a relative low of 5.4 on May 11th.

Soon after, Biden’s tonality advantage began to recover rapidly, likely due to two major news stories in May. The first, on May 8th, marked the release of U.S. unemployment data showing the highest unemployment rate (14.7%) since the Great Depression, mainly due to job losses from the COVID-19 pandemic. These new economic numbers put the Trump administration in a clear defensive position, despite the fact that similar pandemic-fueled economic declines were occurring in almost every major economy in the world.

On May 25th, the second event — the death of George Floyd while being physically immobilized by a Minneapolis police officer — sparked a national outcry against police violence against African-Americans. Whether this outrage should have been directed at Trump (as it was by many news outlets) will be a judgment left to historians. What can be said is that Biden’s tone advantage over Trump trended upwards into the summer, reaching a 2020 peak of 9.0 on July 25th.

In the post-convention media environment, which included intermittent media coverage of the Hunter Biden controversy, Biden’s tone advantage declined for the remainder of the time covered in this analysis.

Admittedly, the GDELT data is imperfect in that it does not allow analysis at a sentence- or paragraph-level. Still, the finding in Figure 1 that Biden-focused news articles have been far more positive than Trump-focused news articles is consistent with the overall finding in the MRC tonal analysis of the 2020 presidential election.

Is this conclusive evidence of the news media’s anti-Trump bias? No. But it should inspire a further inquiry into this question, and to do that will require some methodological finesse. That is, it will require far more than just measuring the tone of news coverage.

In a country where President Trump’s approval rating has hovered between 40 and 46 percent through most of his presidency, the fact that — at a critical time in the election — the network TV news programs were over 90 percent negative towards Trump offers some face validity to the anti-Trump media bias argument.

But my wife’s gut reaction to the MRC research contains a profound point: What if Trump deserved the overwhelmingly negative coverage? After all, is it the job of the news media to reflect public opinion? To the contrary, by definition, an objective news media should be anchored exclusively to reality, not to the ofttimes fickle variations in public sentiment.

Consequently, the central problem in measuring media bias is finding a measure of objective reality by which to assess a president’s performance. Most everything we hold a president accountable for — the economy, foreign policy, personal character, etc. — is subject to interpretations and opinions that are commonly filtered by the news media through layers of oversimplifications, distortions and other perceptual biases.

Perhaps we can use a set of proxy measures? The unemployment rate. Gross domestic product growth. Stock prices. A president’s likability score. But to what extent does a president have an impact on those metrics? Far less than we may want to believe.

And now add to the equation a global pandemic for which Trump’s culpability, though widely asserted in the national news media, is highly debatable but reckless to dismiss out-of-hand.

How can the U.S. news media possibly be equipped to judge a president’s performance by any objective, unbiased standard?

It isn’t. And, frankly, it is likely the average American doesn’t require news organizations to be so equipped. Despite survey evidence from Pew Research suggesting news consumers dislike partisanship in the news media — most likely an artifact of the social desirability bias commonly found in survey-based research — recent studies also show U.S. news consumers choose their preferred news outlets through partisan and ideological lenses.

According to political scientist Dr. Regina Lawrence, associate dean of the University of Oregon’s School of Journalism and Communication and research director for the Agora Journalism Center, selection bias is consciously and unconsciously driving news consumers towards news outlets that share similar partisan and ideological points of view — and, in the process, increases our country’s political divide:

“Selective exposure is the tendency many of us have to seek out news sources that don’t fundamentally challenge what we believe about the world. We know there’s a relationship between selective exposure and the growing divide in political attitudes in this country. And that gap is clearly related to the rise of more partisan media sources.”

The implication of this dynamic on how journalists do their jobs is significant. There is little motivation across all levels of the news-gathering process — from the corporate board room down to the beat reporter — to put an absolute premium on covering news stories from an objective point of view.

Instead, journalists and media celebrities are motivated by the same psychological and economic forces as the rest of us: career advancement, prestige and money. And to succeed in the news business today, a journalist’s output must fit within the dominant (frequently partisan) narratives of his or her news organization.

In a trailblazing data-mining-based study by the Rand Corporation on how U.S. journalism has changed since the rise of cable news networks and social media, researchers found “U.S.-based journalism has gradually shifted away from objective news and offers more opinion-based content that appeals to emotion and relies heavily on argumentation and advocacy.”

And the result of this shift? Viewership, newsroom investments and profits at the three major cable news networks have significantly increased in the past two decades, at the same time that news consumers have shifted their daily news sources away from traditional media (newspapers and TV network news) towards new media outlets (online publications, news aggregators [e.g., Drudge Report], blogs, and social media). In 2019, the major U.S. media companies — which include assets and revenue streams far beyond those generated from their news operations — had a total market capitalization exceeding $930 billion.

Why then should we be surprised that today’s broadcast and print journalists are not held to a high objectivity or accuracy standard? Their news organizations are prospering for other reasons.

During the peak of the Russiagate furor, as many journalists were hiding behind anonymous government sources, few journalists and producers at CNN, MSNBC, The New York Times or The Washington Post openly challenged the basic assumptions of that conspiracy theory, which asserted that Trump had colluded with the Russians during the 2016 election — a charge that, in the end, proved baseless.

Apart from ABC News chief investigative correspondent Brian Ross being disciplined for his false reporting regarding Trump campaign contacts with Russia, I cannot recall a single national news reporter being similarly disciplined for bad Russiagate reporting (and there was a lot of bad Russiagate reporting).

On the other side of the coin, there are certainly conservative news outlets where effusive Trump coverage is encouraged, but those cases are in the minority compared to the rest of the mainstream media (a term I despise as I believe the average national news outlet actively restricts the range of mainstream ideas presented to the news consuming public — and, furthermore, there is nothing ‘mainstream’ about the people who populate our national news outlets).

Being from Iowa, I’ve been spoiled by the number of times I’ve met presidential candidates in person. That, however, is not how most Americans experience a presidential election.

Americans generally experience presidential elections via the media, either through direct exposure or indirect exposure through friends, family and acquaintances; consequently, this potentially gives the news media tremendous influence over election outcomes.

According to Dr. Lawrence, the most significant way the news media impacts elections is through who and what they cover (and who and what they don’t cover). “The biggest thing that drives elections is simple name recognition.”

If journalists refuse to cover a candidate, their candidacy is typically toast. But that is far from the only way the news media can influence elections. How news organizations frame an election — which drives the dominant media narratives for that election — can have a significant impact.

The most common frame is that of the horse race, in which the news media — often through polling and judging the size and enthusiasm of crowds — can, in effect, tell the voting public who is leading and who has the best chance of winning.

“We know from decades of research that the mainstream media tend to see elections through the prism of competition,” according to Lawrence. “Campaigns get covered a lot like sports events, with an emphasis on who’s winning, who’s losing, who’s up, who’s down, how they are moving ahead or behind in the polls.”

There are other narratives, however, that can be equally impactful — such as narratives centered on a candidate’s character (e.g., honesty, empathy) or intellectual capacity.

Was Al Gore as stiff and humorless as often portrayed in the 2000 campaign? Was George W. Bush as intellectually lazy or privileged as implied in much of the coverage from that same campaign?

Even journalists with good intentions can distort reality when motivated to fit their stories into these premeditated story lines.

More ominous, however, is the possibility that news organizations with strong biases against a particular candidate or political party can manipulate their campaign coverage in such a way that even objective facts are framed to systematically favor the voter impressions formed of one candidate over another.

Did that happen in the 2020 presidential election? My inclination is to say yes, but I go back to the original question posed in this essay: Did Donald Trump deserve the overwhelming negative coverage he received across large segments of the national news media?

Without clearly defining and validly measuring the objective, unbiased metrics by which to answer that question, there is no possible way to give a substantive response.

  • K.R.K.

Send comments to: nuqum@protonmail.com

GDELT API for Biden:

https://api.gdeltproject.org/api/v2/summary/summary?d=web&t=summary&k=Biden+-Trump&ts=full&fsc=US&svt=zoom&stt=yes&stc=yes&sta=list&

GDELT API for Trump:

https://api.gdeltproject.org/api/v2/summary/summary?d=web&t=summary&k=Trump+-Biden&ts=full&fsc=US&svt=zoom&stt=yes&stc=yes&sta=list&

Our fascination with “The Queen’s Gambit”

[Headline photo: Judit Polgár, a chess super grandmaster and generally considered the greatest female chess player ever (Photo by Tímea Jaksa)]

By Kent R. Kroeger (Source: NuQum.com; December 8, 2020)

One of the greatest joys I’ve had as a parent is teaching my children how to play chess.

And the most bittersweet moment in that effort is the day (and match) when they beat you and you didn’t let them.

I had that moment during the Thanksgiving weekend with my teenage son when, in a contest where early on I sacrificed a knight to take his queen (and maintained that advantage for the remainder of the contest), I became too aggressive and left my king vulnerable. As I realized the mistake, it was two moves too late. He pounced and mercilessly ended the match.

He didn’t brag. No teasing. Not even a firm handshake. He checkmated me, grabbed a bowl of blueberries out of the fridge, and coolly went to the family room to play Call of Duty with his friends on his Xbox.

I was left with an odd feeling, common among parents and teachers, I suspect. A feeling of immense pride, even as my ego was genuinely bruised.

That is the nature of chess — a game that is both simple and infinitely complex, and offers no prospect of luck for the casual or out-of-practice player. With every move there are only three possibilities: You can make a good decision, a bad decision, or maintain the status quo.

For this, I love and hate chess.

Saying ‘Chess has a gender problem’ is an understatement

My father taught me chess, as his father taught him.

Growing up in the 70s, I had a picture of grandmaster legend Bobby Fischer pasted on my bedroom door. His defeat of Boris Spassky for the 1972 world chess title ranks with the U.S. hockey team’s 1980 “Miracle on Ice” as one of this country’s defining Cold War “sports” victories over the Soviet Union. That is not an exaggeration.

In part because of Fischer’s triumph, I played chess with my childhood friends more often than any other game, save perhaps basketball and touch football.

But despite chess being prominent in my youth, I have no memories of playing chess with members of the opposite sex. My mom? Bridge was her game of choice. The girls in my neighborhood and school? I cannot recall even one match with them. Granted, between the 5th and 11th grades, I didn’t interact with girls much for any reason. And when I did later in high school, by then my chess playing was mostly on hold until graduate school, except for the occasional holiday matches with my father and brothers.

Any study on the sociohistorical determinants of gender-based selection bias should consider chess an archetype of this phenomenon.

The aforementioned chess prodigy, Bobby Fischer, infamously said of women: “They’re all weak, all women. They’re stupid compared to men. They shouldn’t play chess, you know. They’re like beginners. They lose every single game against a man.”

Any pretense that grandmaster chess players must, by means of their chessboard skills, also be smart about everything else is easily dispensed with by referencing the words that frequently came out of Fischer’s mouth when he was alive.

Radio personality Howard Stern once observed that many of the most talented musicians he’s interviewed often lack the ability (or confidence) to talk about anything else except music (and perhaps sex and drugs): “You can’t become that good at something without sacrificing your knowledge in other things.”

That may be one of the great sacrifices grandmaster chess players also make. As evidenced by his known comments on women and chess, Fischer was ill-informed on gender science. Even some contemporary chess greats, such as Garry Kasparov and Nigel Short, have uttered verbal nonsense on the topic.

Writer Katerina Bryant recently reflected on the persistent ignorance within the chess world about gender: “Many of us mistake chess players for the world’s best thinkers, but laying out a champion’s words on the table make the picture seem much more fractured. It’s a fallacy that someone can’t be both informed and ignorant.”

Why we are watching the “The Queen’s Gambit”

Over the Thanksgiving holiday, the intersection in time of losing to my son at chess and the release of Netflix’s new series, The Queen’s Gambit, seemed like an act of providence. The two events reignited my interest in chess and launched a personal investigation into the game’s persistent and overwhelming gender bias.

Of the current 80 highest-rated chess grandmasters, all are men. The highest-rated woman is China’s Hou Yifan (#82).

That ain’t random chance. Something is fundamentally wrong with how the game recruits and develops young talent. The Queen’s Gambit, a fictional account of a young American woman’s rise in the chess world during the 1950s and 60s, speaks directly to that malfunction. While the show mostly focuses on drug addiction, dependency, and emotional alienation, I believe its core appeal is in how it addresses endemic sexism — in this case, the gender bias of competitive chess.

For those who don’t know, The Queen’s Gambit is a 7-part series (currently streaming on Netflix) about Beth Harmon, an orphaned chess prodigy from Kentucky who rises to become one of the world’s greatest chess players while struggling with emotional problems and drug and alcohol dependency. Beth is a brilliant, intuitive chess player…and a total mess.

The Queen’s Gambit is more fun to watch than it sounds. My favorite scene is in Episode 1 when, as a young girl, she plays a simultaneous exhibition against an entire high school chess club and beats them — all boys, of course — easily.

The TV series not only perfectly aligns with the current political mood, its gender bias message is not heavy-handed and is easily digestible. It is fast food feminism for today’s cable news feminists.

Aside from the touching characterization of Mr. Shaibel, the janitor at Beth’s orphanage who introduces her to chess, almost every other man in The Queen’s Gambit is either sexist, a substance abuser, arrogant, or emotionally stunted. What saves The Queen’s Gambit’s cookie-cutter politics from becoming overly turgid and preachy, however, is that Beth isn’t much better, or at least not until the end. Anya Taylor-Joy, the actress who plays Beth, is brilliant throughout the series and alone makes the show’s seven hours running time worth the personal investment.

But beyond the show’s high-end acting and production values, it’s hard not to enjoy a television show about a goodhearted but dysfunctional (i.e., substance-addicted) protagonist who must interact with other dysfunctional people in an equally dysfunctional time.

The audience gets all of that from The Queen’s Gambit, along with a thankfully minor but clunky anti-evangelical Christian, anti-anti-Communism side story.

It is the perfect formula for getting love from the critics and attracting an audience, and the reviews and ratings for The Queen’s Gambit prove the point:

“(The Queen’s Gambit) is the sort of delicate prestige television that Netflix should be producing more often.” – Richard Lawson, Vanity Fair

“Just as you feel a familiar dynamic forming, in which a talented woman ends up intimidating her suitors, The Queen’s Gambit swerves; it’s probably no coincidence that a story about chess thrives on confounding audience expectations.” – Judy Berman, TIME Magazine

The audiences have lined up accordingly. In its first three weeks after release (from October 23 to November 8), Nielsen estimates that The Queen’s Gambit garnered almost 3.8 billion viewing minutes in the U.S. alone.

As the streaming TV ratings methodologies are still in their relative infancy, I prefer to look at Google search trends when comparing media programs and properties. In my own research, I have found that Google searches are highly correlated with Nielsen’s streaming TV ratings (ranging between a Pearson correlation of 0.7 and 0.9 from week-to-week).
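As an illustration of that validation step, here is a minimal Python sketch of the week-to-week correlation check; the show-level figures below are hypothetical placeholders, not the actual Nielsen or Google Trends numbers.

# Sketch: correlate a week's Google Trends interest with Nielsen streaming
# viewing minutes across a handful of shows. All values are hypothetical.
import numpy as np

trends_index    = [96, 71, 64, 45, 30]            # Google Trends index, by show
nielsen_minutes = [1.30, 0.95, 1.10, 0.60, 0.42]  # billions of viewing minutes

r = np.corrcoef(trends_index, nielsen_minutes)[0, 1]
print(f"Pearson r = {r:.2f}")  # the article reports 0.7 to 0.9 week-to-week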

Figure 1 shows a selection of the most popular streaming TV shows in 2020 and their relative number of Google searches (in the U.S.) from July to early December [The Queen’s Gambit is the dashed black line]. While The Queen’s Gambit hasn’t had as high a peak number of searches as some other shows (The Umbrella Academy, Schitt’s Creek, and The Mandalorian), it has sustained a high level of searching over a longer period of time. Over a 5-week period, only The Mandalorian has had a cumulative Google Trends Index higher than The Queen’s Gambit (1,231 versus 1,150, respectively). The next highest is The Umbrella Academy at 1,030.

Figure 1: Google Searches for Selected Streaming TV Shows in 2020

Whatever the core reason for the popularity of The Queen’s Gambit, there is no denying that the show has attracted an unprecedented mass audience for a streaming TV series.

It makes me think Netflix might consider making more than seven episodes.

My one big annoyance with “The Queen’s Gambit”

Mark Twain once said of fiction: “Truth is stranger than fiction, but it is because fiction is obliged to stick to possibilities; truth isn’t.”

While I appreciate Twain’s sentiment — fiction should not be too restrained by truth — he clearly never had to explain to his teenage son that fire-breathing dragons did not exist in the Middle Ages, despite what Game of Thrones suggests. I fear The Queen’s Gambit will launch a generation of people who think an American woman competed internationally in chess in the 1960s, and thereby diminish the much more profound accomplishments of an actual female chess prodigy, Judit Polgár, arguably the greatest female chess player of all time, who famously refused to compete in women-only chess tournaments.

While her achievements would occur two decades after Beth Harmon’s fictional rise, Polgár, a 44-year-old Hungarian, fought the real battle and offers the more substantive and entertaining story, in my opinion. It would be like Hollywood making a movie about a fictional female law professor defeating institutional sexism and rising to the Supreme Court in the 1960s, when the true story of Ruth Bader Ginsburg already exists.

Judit Polgár, a chess super grandmaster, demonstrating the “look” (Photo by Stefan64)

Granted, Walter Tevis’ book upon which The Queen’s Gambit is based was published in 1983, before Polgár was competing internationally, but for someone who followed Polgár’s career, The Queen’s Gambit‘s inspirational tale rings a bit hollow.

No American woman was competing at the highest levels of chess during The Queen’s Gambit time frame of the 1950s and 60s. In stark contrast, Polgár actually did that from 1990 to 2014, achieving a peak rating of 2735 in 2005 — which put her at #8 in the world at the time and would place her at #20 in the world today.

Perhaps only chess geeks understand how rare such an accomplishment is in chess, regardless of gender.

Polgár, at the age of 12, was the youngest chess player ever, male or female, to become ranked in the World Chess Federation’s top 100 (she was ranked #55), and she became a Grandmaster at 15, breaking the youngest-ever record previously held by former World Champion Bobby Fischer.

The list of current or former world champions Polgár has defeated in either rapid or classical chess is mind-blowing: Magnus Carlsen (the highest rated player of all time), Anatoly Karpov, Garry Kasparov, Vladimir Kramnik, Boris Spassky, Vasily Smyslov, Veselin Topalov, Viswanathan Anand, Ruslan Ponomariov, Alexander Khalifman, and Rustam Kasimdzhanov. [I’m geeking out just reading these names.]

And most amazing of all — Polgár may not be the best chess player in her family.

Still, I want to be clear: I loved The Queen’s Gambit. I don’t sit up at 3 a.m. watching Netflix on my iPhone unless I have a good reason — such as watching Czech porn. I merely offer a piece of criticism so that some poor sap 20 years from now doesn’t think The Queen’s Gambit is based on a true story. It most certainly isn’t. It is a pure Hollywood-processed work of fiction.

  • K.R.K.

Send comments to: nuqum@protonmail.com

 

 

Did analytics ruin Major League Baseball?

[Headline graphic: Billy Beane (left) and Paul DePodesta (right). The General Manager and assistant General Manager, respectively, for the 2002 Oakland A’s and who inspired Michael Lewis’ book, “Moneyball.” (Photo by GabboT; used under the CCA-Share Alike 2.0 Generic license.)]

By Kent R. Kroeger (Source: NuQum.com; November 25, 2020)

As he announced his resignation from the Chicago Cubs as that organization’s president of baseball operations in November, Theo Epstein, considered by many the High Priest of modern baseball analytics, made this shocking admission about the current state of baseball:

“It is the greatest game in the world but there are some threats to it because of the way the game is evolving. And I take some responsibility for that because the executives like me who have spent a lot of time using analytics and other measures to try to optimize individual and team performance have unwittingly had a negative impact on the aesthetic value of the game and the entertainment value of the game. I mean, clearly, you know the strikeout rates are a bit out of control and we need to find a way to get more action in the game, get the ball in play more often, allow players to show their athleticism some more and give the fans more of what they want.”

Epstein’s comments were painful for me on two fronts. First, he was leaving the only baseball team I’ve ever loved, having helped the Cubs win the only World Series championship of my lifetime. Second, he put a dagger in the heart of every Bill James and sabermetrics devotee who, like me, has spent countless hours poring through the statistical abstracts of Major League Baseball (MLB) and the National Football League on a quest to build the perfect Rotisserie league baseball team and fantasy football roster.

There is no better feeling than the long search and discovery for those two or three “value” players who nobody else thinks about and who can turn your Rotisserie or fantasy team into league champs.

In a direct way, sports analytics are the intellectual steroids for a generation of sports fans (slash) data geeks who love games they never played beyond high school, if even then.

Epstein’s departure was not entirely a surprise. The Cubs have not come close to repeating their glorious World Series triumph of 2016 — though it is hard to pin that on Epstein. The Cubs still have (when healthy) one of the most talented rosters in baseball. Instead, the surprise was Epstein’s targeting of ‘analytics’ as one of the causes of baseball’s arguable decline.

Like many baseball fans, I’ve assumed baseball analytics — immortalized in Michael Lewis’ book “Moneyball” about the 2002 Oakland A’s and its general manager Billy Beane, who hired a Yale economics grad, Paul DePodesta, to assist him in building a successful small market (i.e., low payroll) baseball team — helped make the MLB, from top-to-bottom, more competitive.

In the movie based on Lewis’ book, starring Brad Pitt and Jonah Hill, this scene perfectly summarizes the value of analytics in baseball (and, frankly, could apply to almost every major industry):

Peter Brand (aka. Paul DePodesta, as played by Jonah Hill):

“There is an epidemic failure within the game to understand what is really happening and this leads people who run major league baseball teams to misjudge their players and mismanage their teams…

…People who run ball clubs think in terms of buying players. Your goal shouldn’t be to buy players. Your goal should be to buy wins, and in order to buy wins you need to buy runs.

The Boston Red Sox see Johnny Damon and they see a star who’s worth seven-and-a-half million dollars. When I see Johnny Damon what I see is an imperfect understanding of where runs come from. The guy’s got a great glove and he’s a decent lead-off hitter. He can steal bases. But is he worth seven-and-a-half million a year?

Baseball thinking is medieval. They are asking all the wrong questions.”

While Beane and DePodesta may have lacked world championships after they introduced analytics into the process, the A’s did have nine winning seasons from 2002 to 2016 during their tenure, which is phenomenal for a small-market, low-payroll team.

At the team-level, the 21st-century A’s are the embodiment of how analytics can help an organization.

But is Epstein still right? Has analytics hurt baseball at the aggregate level?

Let us look at the facts…

Major League Baseball has a Problem

Regardless of the veracity of Epstein’s indictment of analytics for its net role in hurting the game of baseball, does professional baseball have a problem?

The answer is a qualified ‘Yes.’

These two metrics describe the bulk of the problem: (1) Average per game attendance and (2) World Series TV viewership. Since the mid-1990s, baseball game attendance relative to the total U.S. population has been in a near constant decline, going from a high of 118 game attendees (per 1 million people) in the mid-1990s to 98 game attendees (per 1M) in the late 2010s (see Figure 1). At the same time, the long-term trend is still positive. That cannot be discounted.

Figure 1: MLB per game attendance (per 1 million people) (Source: baseball-reference.com)

While the relative decline is significant, the real story of MLB attendance since the league’s inception in the late-19th century is the surge in attendance after World War II, a strong decline after that until the late-1960s, and a resurgence during the 1970s and 80s. In comparison, the attendance decline per capita since the mid-1990s has been relatively small.

Consider also that despite a per capita decline in game attendance since the 1990s, total season attendance has still grown. In 1991, 56.8 million MLB tickets were sold; by 2017, 72.7 million tickets were sold. This increase in gross ticket sales has been matched by a steady rise in MLB ticket prices as well. The average cost of attending an MLB game in 1991 was $142, but by 2017 that figure had increased to $219 (an increase of roughly 54 percent). In that context, the 15 to 20 percent decline in game attendance (per capita) seems more tolerable and far from catastrophic. In fact, if it weren’t for this next metric, baseball might be in great shape, even if its relative popularity is in decline.
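The percent changes cited above reduce to simple arithmetic; here is a quick check in Python:

# Quick check of the attendance and ticket-price changes cited above.
def pct_change(old: float, new: float) -> float:
    return (new - old) / old * 100

print(f"Tickets sold:  {pct_change(56.8, 72.7):.0f}% increase")  # ~28%
print(f"Average price: {pct_change(142, 219):.0f}% increase")    # ~54%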

The TV ratings and viewership for MLB's crown jewel event, the World Series, have been in a near straight-line decline since the mid-1970s, when Billy Martin's New York Yankees and the Tommy Lasorda-led Los Angeles Dodgers were the sport's dominant franchises and happened to play in the nation's two largest cities. Big-market teams in the World Series are always good for TV ratings.

As seen in Figure 2, average TV viewership for the World Series (the orange line) has declined from a high of 44.3 million in 1978 (Yankees vs. Dodgers) to just under 9.8 million in the last World Series (Dodgers vs. Rays).

Figure 2: The TV Ratings and Viewership (average per game) for the World Series since 1972 (Source: Nielsen Research)

Even with the addition of mobile and online streaming viewers, which lifts the 2020 World Series viewership number to 13.2 million, the decline in the number of eyeballs watching the World Series since the 1970s has been dramatic.

In combination with the trends in game attendance, the precipitous decline in live viewership points to one clear conclusion: relatively fewer people are going to baseball games or watching them on TV or the internet. That is a formula for an impending financial disaster among major league baseball franchises.

While stories of baseball’s imminent death are exaggerated, baseball does have serious problems. But what are they exactly? And how has analytics impacted those probable causes?

Are baseball’s problems bigger than the game itself?

Before looking within the game of baseball itself (and the role of analytics) to explain its relative popularity decline, we must consider the broader context.

Sports fans today demand something different from what MLB offers

Living with a teenage son who loves the NBA and routinely mocks my love of baseball, I see a generational divide that will challenge any attempt to update a sport once considered, without debate, to be America's pastime. Kids (and, frankly, many of their parents) don't have the patience or temperament to appreciate the deep-rooted intricacies of a game where players spend more time waiting than actually playing. Only 10 percent of a baseball game involves actual action, according to one study. For kids raised on Red Bull and Call of Duty, baseball is more like a horse and buggy than a Bugatti race car.

And the in-game data supports that assertion. In 1970, a nine-inning major league baseball game took, on average, two and a half hours to complete. In 2020, it took three hours and six minutes. By comparison, a World Cup soccer match takes about one hour and 50 minutes from the moment the first whistle blows, and an NBA game takes about two and a half hours.

Baseball is too slow…and getting slower.

[For a well-constructed counterargument to the ‘too slow’ conclusion, I invite you to read this essay.]

In contrast, the NBA and World Cup soccer possess near constant action. Throw in e-games (if you consider those contests a sport) and it is reasonable to conjecture that baseball is simply a bad fit for the times. Even NFL football, whose average game takes over three hours, has challenges in that regard.

Did analytics lead to longer baseball games? Let us examine the evidence.

Figure 3 shows the long-term trend in the length of 9-inning MLB games, divided into baseball 'eras' as defined by Mitchell T. Woltring, Jim K. Rost, and Colby B. Jubenville in their 2018 research paper published by Sports Studies and Sports Psychology. They identified seven distinct eras in major league baseball: (1) "Dead Ball" (1901 to 1919), (2) "Live Ball" (1920 to 1941), (3) "Integration" (1942 to 1960), (4) "Expansion" (1961 to 1976), (5) "Free Agency" (1977 to 1993), (6) "Steroids" (1994 to 2005) and (7) "Post-Steroids" (2006 to 2011). For this essay, however, I relabeled their 'Post-Steroids era' as the 'Analytics era' and extended it to the present.

(Note: MLB game length was not consistently measured until the “Integration era.”)

Figure 3: Average length of a 9-inning MLB game since 1946.

Though I will share the detailed statistical analysis of the era intervention effects on average MLB game length upon request, the basic findings are straightforward (a rough sketch of this type of era-intervention regression follows the findings below):

(1) The average length of 9-inning MLB games increased significantly during the 'Integration,' 'Free Agency,' and 'Analytics' eras, but did not increase during the 'Expansion' and 'Steroids' eras.

(2) The long-term trend was already pointing up before the ‘Analytics era’ (+50 seconds per year), though analytics may have had a larger marginal effect on game length (+78 seconds per year).
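
As promised above, here is a rough Python sketch of what such an era-intervention regression can look like. It is not the author's actual model; it assumes a hypothetical file game_length.csv containing each season's average 9-inning game length in minutes:

    # A rough sketch (not the author's actual analysis) of an era-intervention
    # regression on average 9-inning game length. Assumes a hypothetical CSV
    # 'game_length.csv' with columns: year, avg_minutes (seasons from 1946 on).
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("game_length.csv")

    def era(year: int) -> str:
        # Era boundaries as defined in the text; game length is only
        # consistently measured from the 'Integration era' onward.
        if year <= 1960:
            return "Integration"
        if year <= 1976:
            return "Expansion"
        if year <= 1993:
            return "FreeAgency"
        if year <= 2005:
            return "Steroids"
        return "Analytics"

    df["era"] = df["year"].apply(era)
    df["t"] = df["year"] - df["year"].min()   # overall time trend, in years

    # Era dummies shift the level of game length; the t-by-era interactions shift
    # the slope, so each interaction coefficient (times 60) is that era's marginal
    # effect in seconds per year relative to the reference era.
    model = smf.ols("avg_minutes ~ t * C(era, Treatment('Integration'))", data=df).fit()
    print(model.summary())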

As to why the 'Analytics era' saw an increase in game times, one suggested explanation is that the 'Steroids era' disproportionately rewarded juiced-up long-ball hitters who tended to spend less time at the plate. The 'Analytics era' has also emphasized home run hitting, but today's home run hitters are more patient at the plate. According to baseball writer Fred Hofstetter, pitchers have changed as well:

“This (increase in game times) won’t surprise anyone who follows the game closely. The general demographic change trending into 2020:

  1. Patient hitters are replacing free swingers
  2. Hard-throwing strikeout-getters are replacing pitch-to-contact types

Pitchers who throw harder tend to take more time between pitches. Smart hitters take more pitches. There are more pitches with more time between them. The result is a rising average of time between pitches.”

Are these changes in the game related to analytics? It is hard to know given the concurrent (and assumed) decline in steroid use in MLB during the 2000s, but the apparent consensus is that pitcher-batter dynamics since 2000 have become more sophisticated and time-consuming than they were during the 'Steroids era.'

My conclusion on the impact of analytics on the length of MLB baseball games: Unclear.

Are there other aspects of baseball affected by analytics?

Investigating the role of analytics in 21st-century baseball is complicated by the confounding effects of other changes happening in the game around the same time, the most obvious being MLB's increased enforcement of its performance-enhancing drug policies. But sports writer Jeff Rivers notes another ongoing trend: the country's best athletes are increasingly choosing football and basketball over baseball, a shift that may have been under way for some time.

“Major League Baseball used to offer its athletes the most prestige, money and fame among our nation’s pro team sports, but that hasn’t been true for decades,” writes Rivers. “Consequently, Major League Baseball continues to lose in the competition for talent to other major pro team sports.”

It is also possible analytics have exacerbated this supposed decline in athlete quality by discouraging some of baseball’s most exciting plays.

“The focus on analytics in pro sports has led to more scoring in the NBA…but fewer stolen bases and triples, two of the game’s most exciting plays, in pro baseball,” asserts Rivers.

Is there really a distinct ‘Analytics era’ in baseball?

Another problem in assessing the role of baseball analytics is that the ‘Analytics era’ (what I’ve defined as 2006 to the present) may not be that distinct.

Henry Chadwick invented the baseball box score in 1858 and, by 1871, statistics were consistently recorded for every game and player in professional baseball. In 1964, Earnshaw Cook published his statistical analysis of baseball games and players and seven years later the Society for American Baseball Research (SABR) was founded.

In the early 1970s, as statistics advanced as a topic among fans, Baltimore Orioles player Davey Johnson was writing FORTRAN computer code on an IBM System/360 to generate statistical evidence supporting his belief that he should bat second in the Orioles lineup (his manager Earl Weaver was not convinced, however).

In 1977, Bill James published the first of his annual Baseball Abstracts, which used statistical analyses to argue that many popular performance metrics, such as batting average, were poor predictors of how many runs a team would score. James and other SABRmetricians (as they would come to be called) argued that a better measure of a player's worth is his ability to help his team score more runs than the opposition. To that end, they initially preferred metrics such as On-Base Percentage (OBP) and Slugging Percentage (SLG) for judging player value, and would later combine the two into the On-base Plus Slugging (OPS) performance metric.

[Note: OBP is the ratio of the batter’s times-on-base (TOB) (which is the sum of hits, walks, and number of times hit by pitch) to their number of plate appearances. SLG measures a batter’s productivity and is calculated as total bases divided by at bats. OPS is simply the sum of OBP and SLG.]
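
To make those definitions concrete, here is a minimal Python sketch that computes OBP, SLG, and OPS exactly as defined in the note above (the season stat line is hypothetical):

    # OBP, SLG, and OPS computed as defined in the note above
    # (the example stat line below is hypothetical).

    def obp(hits: int, walks: int, hit_by_pitch: int, plate_appearances: int) -> float:
        """On-Base Percentage: times on base divided by plate appearances."""
        return (hits + walks + hit_by_pitch) / plate_appearances

    def slg(singles: int, doubles: int, triples: int, home_runs: int, at_bats: int) -> float:
        """Slugging Percentage: total bases divided by at bats."""
        total_bases = singles + 2 * doubles + 3 * triples + 4 * home_runs
        return total_bases / at_bats

    def ops(on_base_pct: float, slugging_pct: float) -> float:
        """On-base Plus Slugging: the simple sum of OBP and SLG."""
        return on_base_pct + slugging_pct

    # Hypothetical season: 150 hits (100 singles, 30 doubles, 5 triples, 15 HR),
    # 60 walks, 5 hit-by-pitch, 500 at bats, 570 plate appearances.
    o = obp(hits=150, walks=60, hit_by_pitch=5, plate_appearances=570)
    s = slg(singles=100, doubles=30, triples=5, home_runs=15, at_bats=500)
    print(f"OBP = {o:.3f}, SLG = {s:.3f}, OPS = {ops(o, s):.3f}")   # 0.377, 0.470, 0.847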

Batting averages and pitchers' Earned Run Averages (ERA) have been a systematic part of player evaluation since baseball's earliest days. Modern analytics didn't invent most of the statistics used today to assess player value; it merely refined and advanced them.

Nonetheless, there is something fundamentally different in how MLB players' values are assessed today compared with the days before Billy Beane, Paul DePodesta and Moneyball.

But when did analytics truly take over the talent acquisition process in major league baseball? There is no single, well-defined date. However, many baseball analysts point to the 2004 Boston Red Sox, whose general manager was Theo Epstein, as the first World Series winner significantly driven by analytics.

Something unique and profound was going on in major league baseball's front offices between Billy Beane's 2002 A's and the Boston Red Sox' 2007 World Series win, the franchise's second championship in four years.

By 2009, most major league baseball teams had a full-time analytics staff working in tandem with their traditional scouting departments, according to Business Administration Professor Rocco P. Porreca.

So, why did I pick 2006 as the start of the 'Analytics era'? There is no definitive reason, except that it is roughly the halfway point between the 2003 release of Lewis's book Moneyball and 2009, the point at which most major league baseball teams had stood up a formal analytics department. It would have been equally defensible to set 2011 or 2012 as the starting point, as many of the aggregate baseball game measures we are about to look at changed direction around that time.

The Central Mantra of Baseball Analytics: “He gets on base”

Lewis’ book Moneyball outlined the player attribute that 2002 A’s assistant general manager Paul DePodesta sought most when evaluating talent: the ability to get on base.

This scene from the movie Moneyball drives home that point:

As the 2002 A’s scouting team identifies acquisition prospects, the team’s general manager, Billy Beane, singles out New York Yankees outfielder David Justice:

A’s head scout Grady Fuson:  Not a good idea, Billy.

Another A’s scout:  Steinbrenner’s so pissed at his decline that he’s willing to eat a big chunk of his contract just to get rid of him.

Billy Beane:  Exactly.

Fuson: Ten years ago, David Justice—big name. He’s been in a lot of big games. He’s gonna really help our season tickets early in the year, but when we get in the dog days in July and August, he’s lucky if he’s gonna hit his weight…we’ll be lucky if we get 60 games out of him. Why do you like him?

[Beane points at assistant general manager Peter Brand (aka. Paul DePodesta)]

Peter Brand: Because he gets on base.

This was the fundamental conclusion analytic modelers started driving home to a growing number of baseball general managers after 2002: find players who can get on base.

And Theo Epstein was among the first general managers to drink the analytics Kool-Aid, and he did it while leading one of baseball's richest franchises, the Boston Red Sox. Shortly after the 2002 World Series, the Red Sox hired the 28-year-old Epstein, the youngest general manager in MLB history, to help them end their 86-year World Series drought. Two years later, the Red Sox and Epstein did just that, and Epstein's use of analytics in player evaluation was one of the reasons cited for the team's success. Epstein would eventually take his analytics to the Chicago Cubs in 2011, and five years later the Cubs ended their 108-year championship drought.

Until Epstein's departure from the Cubs, there had been scant debate within baseball about the value of analytics. Almost every recent World Series champion, including the Red Sox, Cubs, Royals, and Astros, has an analytics success story to tell. By all accounts, it's here to stay.

So why, on his way out the door in Chicago, did Epstein throw a verbal grenade into the baseball fraternity by suggesting analytics have had “a negative impact on the aesthetic value of the game and the entertainment value of the game”? He specifically cited analytics' responsibility for the recent rise in strikeouts, bases-on-balls, and home runs (as well as the decline in stolen bases) as the primary cause of baseball's aesthetic decline.

Is Epstein right? The short answer: it is not at all clear that analytics is the problem, even if it did change the 'aesthetics' of the game.

A brief look at the data…

As a fan of baseball, I find bases-on-balls and strike outs near the top of my list of least favorite in-game outcomes.

But when we look at the long-term trends in walks and strike outs, it's hard to pin the blame on analytics (see Figure 4). Strike outs in particular have been on a secular rise since the beginning of organized baseball in the 1870s, with only three periods of sustained decreases: the 'Live Ball,' 'Expansion,' and 'Steroids' eras. The 'Analytics era' emphasis on hard-throwing strike out pitchers over slower-throwing 'location' pitchers may be working (strike outs have risen from 6 to 9 per team per game), but that is part of a longer-term trend: pitchers have become better at striking out batters since the sport's beginning. The only times batters have caught up with pitching is when the baseball itself was altered ('Live Ball era'), pitching talent was watered down ('Expansion era'), or the batters juiced up ('Steroids era').

As for the rise in bases-on-balls, there is evidence of a trend reversal around 2012, with walks rising sharply between 2012 and 2020, the heart of the 'Analytics era.' Tentatively, then, we can conclude that one excitement-challenged baseball event has become more prominent; but even here, the current number of walks per team per game (3.5) is near the historical average. At the bases-on-balls peak in the late 1940s, baseball was at its apex in popularity, and MLB attendance declined as bases-on-balls plummeted through the 1950s (see Figure 1).

Figure 4: Trends in Bases-on-Balls and Strike Outs in Major League Baseball since 1871.
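
For readers who want to reproduce series like those in Figure 4, here is a minimal Python sketch. It assumes the freely available Lahman Baseball Database Teams table; the file path (and the availability of strike out totals for some early seasons) are assumptions, not the author's actual data source:

    # A minimal sketch for reproducing per-team-per-game walk and strike out rates.
    # Assumes the Lahman Baseball Database 'Teams.csv' file (one row per team-season);
    # batter strike out (SO) totals are missing for some early seasons.
    import pandas as pd

    teams = pd.read_csv("Teams.csv")   # columns used: yearID, G, BB, SO

    season = teams.groupby("yearID")[["G", "BB", "SO"]].sum()
    season["bb_per_team_game"] = season["BB"] / season["G"]
    season["so_per_team_game"] = season["SO"] / season["G"]

    print(season[["bb_per_team_game", "so_per_team_game"]].tail())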

It is difficult to blame baseball’s relative decline in popularity on increases in strike outs and walks or the role of analytics in those in-game changes.

But what about two of baseball's most exciting plays, stolen bases and home runs? According to Epstein, the analytics-driven decline in stolen bases and the concomitant rise in home runs have robbed the game of crucial action that helps drive fan excitement.

As shown in Figure 5, there is strong evidence that the ‘Analytics era’ has seen a reversal in trends for both stolen bases and home runs. Since 2012, the number of home runs per team per game has risen from 0.9 to 1.3, and the number of steals per team per game has fallen from 0.7 to 0.5.

Stolen bases may be a rarity now, but they have not been common at any point since the 'Live Ball era' began, peaking at about 0.9 per team per game in the late 1980s. In truth, stolen bases have never been a big part of the game.

Home runs are a different matter. Epstein's complaint that there are too many home runs in today's baseball is a puzzling charge. In 45 years as a baseball fan, I have yet to hear a fan complain that his or her team hit too many home runs.

Yes, home runs eliminate some of the drama associated with hitting a ball in play — Will the batter stretch a single into a double or a double into a triple? Will the base runner go for third or for home? — but do those in-game aesthetics create more adrenaline or dopamine than the anticipation over whether a well hit ball will go over the fence? I, personally, find it hard to believe that too many home runs are hurting today’s baseball.

But is Epstein right in saying analytics may have played a role in the recent increase in home runs? The answer is an emphatic yes.

As the MLB worked to remove steroids from the game in the late 1990s, the number of home runs per game dropped dramatically…until 2011. As the ‘Analytics era’ has become entrenched in baseball, home runs have increased year-to-year as fast as they did during the heyday of steroids, rising from 0.9 per game per team in 2011 to 1.3 in 2020. In an historical context, professional baseball has never seen as many home runs as it does today.

However, again, in the long-term historical context, the ‘Analytics era’ is just continuing a trend that has existed in baseball since its earliest days. Most batters have always coveted home runs and all pitchers have loathed them — analytics didn’t cause that dynamic.

Figure 5: Trends in Stolen Bases and Home Runs in Major League Baseball since 1871.

The holy grail of baseball analytic metrics is On-base Plus Slugging (OPS), a comprehensive measure of batter productivity that captures both how often a batter reaches base and how often he hits for extra bases (and, from a defensive perspective, an indicator of how well a team's pitching and fielding stunt opposing batters' productivity).

The highly regarded OPS matters to baseball analytics gurus because of its strong correlation with the proximal cause of winning and losing: the number of runs a team scores.

Since 1885, the Pearson correlation between OPS and the number of runs per game is 0.56 (highly significant at the two-tailed, 0.05 alpha level). And it is on the OPS metric that the 'Analytics era' has made a surprisingly modest impact, hardly large enough to be responsible for harming the popularity of baseball (see Figure 6). If anything, shouldn't a higher aggregate OPS indicate a more exciting brand of baseball, even if it includes a larger number of home runs?
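
As an illustration of how that kind of correlation can be computed, here is a minimal Python sketch. It again assumes the Lahman Teams table and approximates OPS from season totals (hit-by-pitch is omitted for simplicity), so it is a sketch of the calculation, not the author's exact one:

    # Pearson correlation between season-level OPS and runs per team per game.
    # Assumes the Lahman 'Teams.csv' file; OBP here omits hit-by-pitch.
    import pandas as pd
    from scipy.stats import pearsonr

    teams = pd.read_csv("Teams.csv")
    teams = teams[teams["yearID"] >= 1885].dropna(subset=["AB", "H", "2B", "3B", "HR", "BB", "R", "G"])

    season = teams.groupby("yearID").sum(numeric_only=True)
    singles = season["H"] - season["2B"] - season["3B"] - season["HR"]
    obp = (season["H"] + season["BB"]) / (season["AB"] + season["BB"])
    slg = (singles + 2 * season["2B"] + 3 * season["3B"] + 4 * season["HR"]) / season["AB"]
    ops = obp + slg
    runs_per_game = season["R"] / season["G"]

    r, p = pearsonr(ops, runs_per_game)
    print(f"Pearson r = {r:.2f}, two-tailed p = {p:.4f}")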

Prior to the 'Analytics era,' the 'Steroids era' (1994 to 2005) witnessed a comparable surge in OPS (and home runs) and the popularity of baseball grew, at least until stories of steroid use became more prominent in the sports media.

Figure 6: Trends in On-base Plus Slugging (OPS) and # of Runs in Major League Baseball since 1871.

Epstein's pinning of baseball's current troubles on analytics raises the question of what other factors could explain some of the recent changes in the game's artfulness. These in-game modifications cannot all be dropped at the feet of analytics: the slow pruning of steroids from the game, shifts in baseball's young talent pool, the changing tastes of American sports fans, and the growth of other sports entertainment options cannot be ignored.

Final Thoughts

Baseball has real problems, particularly with the new generation of sports fans. The MLB should not underestimate the negative implications of this problem.

However, the sport is not dying and analytics is not leading it towards a certain death. Analytics did not cause baseball’s systemic problems.

For those who assume major league baseball is a sinking ship, analytics has done little more than rearrange the deck chairs on the Titanic. But for those of us who believe baseball is still one of the great forms of sports entertainment, we must admit the sport is dangerously out of touch with the modern tastes and appetites of the average American sports fan.

And though analytics may not have helped the sport as much as Moneyball suggested it would, neither has it done the damage Epstein suggests.

  • K.R.K.

Send comments to: nuqum@protonmail.com

Postscript:

This is my favorite scene from Moneyball. It is the point at which head scout Grady Fuson (played by Ken Medlock) confronts Billy Beane (Brad Pitt) over his decision-making style as general manager. Most Moneyball moviegoers (and readers of Lewis' book) probably view Fuson as the bad guy in the film, a dinosaur unwilling to change with the times. As a statistician who has faced similar confrontations in similar contexts, I see Fuson as an irreplaceable reality check for data wonks who believe hard data trumps experience and intuition. In my career, I have found all of those perspectives important.

Fuson asks Beane into the hallway so he can clear the air. Fuson then says to Beane:

“Major League Baseball and its fans would be more than happy to throw you and Google boy under the bus if you keep doing what you’re doing here. You don’t put a team together using a computer.

Baseball isn’t just numbers. It’s not science. If it was, anybody could do what we’re doing, but they can’t because they don’t know what we know. They don’t have our experience and they don’t have our intuition.

You’ve got a kid in there that’s got a degree in economics from Yale and you’ve got a scout here with 29 years of baseball experience.

You’re listening to the wrong one now. There are intangibles that only baseball people understand. You’re discounting what scouts have done for a hundred and fifty years.”

Years later, Fuson would react to how he was portrayed in Lewis’ book and subsequent movie:

“When I was a national cross-checker, I raised my hand numerous times and said, ‘Have you looked at these numbers?’ I had always used numbers. Granted, as the years go on, we’ve got so many more ways of getting numbers. It’s called ‘metrics’ now. And metrics lead to saber-math. Now we have formulas. We have it all now. But historically, I always used numbers. If there’s anything that people perceived right or wrong, it’s that me and Billy are very passionate about what we do. And so when we do speak, the conversation is filled with passion. He even told me when he brought me back, ‘Despite what some people think, I always thought we had healthy, energetic baseball conversations.’”

At times I think people want to believe analytics and professional intuition are mortal enemies. In my experience, one cannot live without the other.