Category Archives: Opinion

Nursing home liability waivers may have been our nation’s worst public policy during the COVID-19 pandemic

By Kent R. Kroeger (Source: NuQum.com; August 1, 2021)

The ultrastructural morphology exhibited by the 2019 Novel Coronavirus (2019-nCoV) (Image created by CDC/ Alissa Eckert, MS; Dan Higgins, MAM; This image is a work of the Centers for Disease Control and Prevention, part of the United States Department of Health and Human Services, taken or made as part of an employee’s official duties. As a work of the U.S. federal government, the image is in the public domain.)

[All data used in this essay is available on GitHub here.]

Last week, the Biden Justice Department decided not to open a civil rights investigation into the possibility that New York state officials intentionally manipulated data regarding nursing home deaths in an attempt to obfuscate the relatively high number of COVID-19 deaths within nursing homes.

A Justice Department letter sent by Deputy Assistant Attorney General Joe Gaeta to congressional Republican lawmakers read: “Based on that review, we have decided not to open a Civil Rights of Institutionalized Persons Act investigation of any public nursing facility within New York at this time.”

A similar letter was sent to state officials in Michigan and Pennsylvania. Apparently, the Justice Department is still considering investigations in New Jersey nursing homes.

Though New York Governor Andrew Cuomo’s office did not immediately respond to press inquiries about the Justice Department decision, it would not be surprising if Governor Cuomo is feeling some vindication right now.

Unfortunately, the COVID-19 data continues to indicate something unusual may have happened in states such as New York, New Jersey, Massachusetts, Michigan and Pennsylvania where nursing homes were required to take in elderly patients previously hospitalized for COVID-19. To ensure cooperation among nursing home operators, this policy was supported by state laws or governors’ orders granting nursing homes and other long-term care facilities legal immunity from prosecutions related to COVID-19 deaths.

In the first few months of the pandemic, at least 19 states had some form of a nursing home liability waiver: Alabama, Arizona, Connecticut, Georgia, Illinois, Kentucky, Louisiana, Massachusetts, Michigan, Mississippi, Nevada, New Jersey, New York, North Carolina, Pennsylvania, Rhode Island, Vermont, Virginia, and Wisconsin.

Richard Mollot, executive director of the New York-based Long Term Care Community Coalition — a group that advocates for nursing home residents across the country — offers a grim critique of these liability waivers: “It’s basically a license for neglect.”

The state-level COVID-19 data summarized below will bear out Mollot’s concern.

It is not shocking that the Biden Justice Department wants to avoid risking political damage to Democratic governors heading into the 2022 midterm elections. But it is surprising that there is no groundswell among news organizations, government health experts, and academic researchers to demand answers on how the nursing home liability waivers may have impacted the number of COVID-19 deaths during this pandemic — if one of your loved ones was among the more than 15,800 New York nursing home residents who have died from COVID-19, you might be particularly interested in the answer.

An in-depth inquiry into the impact of these liability waiver policies does not need to imply criminal conduct. Cuomo, better than other governors, aptly defended the nursing home liability waivers when New York lawmakers began to question their rationale during the pandemic’s first wave.

At the time, Cuomo correctly noted that ICU beds in New York were rapidly filling with COVID-19 patients and that there was an urgent need to move recovering patients out of hospitals to other care environments. Insinuations by some (including me) that Cuomo may have implemented the policy, in part, as a favor to nursing home owners who had historically been generous donors to Democratic candidates were wholly unfair and irresponsible. Partisan politics brings out the worst in us all.

And if ever there is a time to avoid a partisan inquisition, an open and comprehensive investigation into the impact of the nursing home liability waivers on COVID-19 deaths would be that time. Considering four of the 19 states with nursing home liability waivers at the start of the pandemic had Republican governors, no political party can universally claim the moral high ground.

Nursing home liability waiver laws are not all the same, so assessing their collective impact is not necessarily straightforward

In an August 2020 analytic essay, I concluded that a state’s population density was the strongest correlate with the relative number of COVID-19 deaths in that state (from 18 January 2020 to 13 August 2020), but that nursing home liability waivers were also a strong correlate, associated with perhaps as many as 22,500 additional deaths over the period.

Has anything changed since then to require a different conclusion regarding these liability waivers? As with any public policy whose impact is closely monitored, bureaucrats and politicians can incorporate new learning into revising those policies. That happened with New York’s liability waiver policy soon after it became apparent — in mid-April 2020 — that a disproportionate number of COVID-19 deaths were occurring among nursing home residents. In early May, Cuomo announced that New York would no longer require nursing homes to take COVID-19 patients from hospitals.

“We’re just not going to send a person who is positive to a nursing home after a hospital visit. Period,” Cuomo said at a news conference announcing the policy change.

But the damage in human lives was already done and the extent to which other states with similar liability waiver policies changed their implementation policies appears varied. Connecticut, for example, decided to move COVID-19 positive elderly patients from hospitals to nursing homes dedicated to housing only COVID-19 patients. Healthy residents were not mixed with COVID-19 positive residents.

But, by all indications, the liability waivers remain intact, even while states have stopped moving patients from hospitals to nursing homes. In fact, some states passed their own liability waiver laws in the aftermath of New York’s tragic experience with the law. Iowa, for example, passed a liability waiver law in June 2020 giving businesses, such as nursing homes and medical facilities, protection from civil lawsuits regarding COVID-19. In July 2020, my mother — who was in her 90s — contracted COVID-19 while in an Iowa nursing home and was initially moved to a hospital, but returned to her nursing home days later, despite still suffering significant symptoms from the illness. She thankfully survived, but the experience now appears indicative of the risks associated with liability waiver laws.

Since June 2020, Iowa has had one of the highest COVID-19 death rates in the country, and it is fair to ask whether Mollot and other critics of liability waiver policies are correct in suggesting they are a “license for neglect.”

The circumstantial evidence suggests nursing home liability waivers cost some people their lives

It is hard to avoid the conclusion that early in this pandemic a significant number of elderly Americans died from COVID-19 because they were moved prematurely from hospital care to a nursing home.

Figure 1 (below) shows the state rankings for COVID-19 deaths per 1 million people (as of 27 July 2021). Nine out of the top 10 states in COVID-19 deaths per capita were states that had a nursing home liability waiver law in place at the start of the pandemic.

Figure 1: Top 20 U.S. states with highest COVID-19 death rates through 27 July 2021

Source: RealClearPolitics

As noted, by May 2020, states such as New York, New Jersey and Connecticut had made significant changes in their implementation of nursing home liability waiver laws. In most cases, they either stopped forcing nursing homes to take on elderly COVID-19 hospital patients or segregated nursing homes by COVID-19 negative and positive residents.

Consequently, it is worth examining whether, statistically, there has been a noticeable drop in COVID-19 death rates in the 19 states that had liability waivers at the start of the pandemic.

Methodological Note: At least 10 states worked to pass nursing home liability waiver laws after the start of the pandemic. At the time of writing this analytic essay, I had not determined exactly how many of those states actually passed such laws and therefore leave the analysis of their possible impact on COVID-19 death rates for a follow-up essay.

Figure 2 (below) shows the ranking of U.S. states (+D.C.) by COVID-19 death rates from 18 January 2020 to 31 May 2020 (Pandemic Phase 1). The red bars indicate states that had nursing home liability waivers laws in place at the start of the pandemic. The point biserial correlation statistic — appropriate for correlating a continuous variable with a binary variable (see Appendix A for its calculation) — between a state’s COVID-19 death rate in Phase 1 and whether it had a liability waiver law was 0.49 (significant at p<.05).

Figure 2: COVID-19 deaths per 1 million people from Jan. to May 2020 (Phase 1)

Data Sources: New York Times (COVID-19 data) and Time Magazine (Liability Waiver data)
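The point biserial statistic itself is simple to compute directly. A minimal Python sketch is below; the waiver indicator and death-rate figures are made-up placeholders for illustration, not the essay’s actual state data:

```python
import numpy as np

def point_biserial(binary, continuous):
    """Point-biserial correlation between a 0/1 indicator and a continuous variable.

    Equivalent to the Pearson correlation when the binary variable is coded 0/1.
    """
    binary = np.asarray(binary, dtype=float)
    continuous = np.asarray(continuous, dtype=float)
    n = len(binary)
    n1 = binary.sum()          # group coded 1 (e.g., waiver states)
    n0 = n - n1                # group coded 0
    m1 = continuous[binary == 1].mean()
    m0 = continuous[binary == 0].mean()
    s = continuous.std()       # population (ddof=0) standard deviation
    return (m1 - m0) / s * np.sqrt(n1 * n0 / n**2)

# Illustrative data only: waiver indicator and Phase 1 deaths per 1M people
waiver = np.array([1, 1, 0, 0, 1, 0, 0, 1, 0, 0])
deaths_per_1m = np.array([1600, 1400, 500, 450, 1200, 300, 350, 1500, 400, 380])

print(round(point_biserial(waiver, deaths_per_1m), 2))
```

Because the binary variable is coded 0/1, the result matches an ordinary Pearson correlation, which is a useful sanity check on any hand-rolled implementation.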

To test whether nursing home liability waiver laws continued to impact COVID-19 death rates after May 2020, I calculated the point biserial correlation using data from 1 June 2020 to 7 July 2021 (Pandemic Phase 2). The resulting statistic was 0.24 (significant at p<.10).

Figure 3: COVID-19 deaths per 1 million people from 1 June 2020 to 7 July 2021 (Phase 2)

Data Sources: New York Times (COVID-19 data) and Time Magazine (Liability Waiver data)

Apparently, the relationship between liability waiver laws and COVID-19 death rates weakened during the pandemic’s Phase 2, but may not have disappeared completely.

Notable in Figure 3 is how New York and Connecticut performed better than most other states in terms of COVID-19 death rates during Phase 2, while New Jersey, Massachusetts, and Illinois continued to experience above-average death rates. Also notable is the extremely poor COVID-19 death rate performance of states such as Arizona, Mississippi, South Dakota, Alabama and Arkansas — the explanation for which most likely goes beyond liability waiver laws and probably includes cultural, economic, and public policy factors.

Figure 4 shows a scatterplot of COVID-19 death rates by the two pandemic phases. Again, the states with liability waiver laws are in red, and they tend to cluster in the upper right-hand quadrant of the scatterplot (indicating high COVID-19 death rates in both pandemic phases).

Figure 4: Scatterplot of COVID-19 deaths per 1 million people (Phase 1 versus Phase 2)

Data Sources: New York Times (COVID-19 data) and Time Magazine (Liability Waiver data)

Nothing presented so far is conclusive about the impact of liability waiver laws, but the results are suggestive of something potentially consequential.

A mediation model of COVID-19 deaths per capita

The final statistical analysis I ran attempted to account for the other major factors that may have impacted state-level COVID-19 death rates (from 18 January 2020 to 7 July 2021), including nursing home liability waivers.

The other factors considered in the model were:

  • Population Density (natural log)
  • Trump vote percentage in 2016 — a proxy variable for the various COVID-19 policies that tended to cluster based upon Red States vs. Blue States.
  • Stay-at-Home orders — an additional policy control based on whether states ever instituted stay-at-home orders during the pandemic.
  • GDP per capita (natural log) — an indication of a state’s level of economic activity
  • Percent uninsured — an indication of the percentage of a state’s citizens lacking health care resources.

The model’s mediating variable through which these factors may have indirectly operated on COVID-19 death rates was the number of COVID-19 cases per 1 million people.

The mediation model was estimated in JASP; its total effects results and path plot are summarized in Figures 5 and 6 below.

[The full model estimates are in Appendix B.]

Figure 5: Mediation model of COVID-19 deaths per capita — Total Effects (18 January 2020 to 7 July 2021)

Figure 6: Path model of COVID-19 deaths per capita (Jan. 2020 to 7 July 2021)

LN_D = LOG(Deaths per 1M); LN_C = LOG(Cases per 1M); PCT = Pct. Uninsured; LN_G = LOG(GDP per capita); STA = States w/ Stay-at-Home Orders; TRU = Trump Vote % in 2016; NUR = Nursing Home Liability Waiver states; LN_P = LOG(Population Density)
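The essay’s model was estimated in JASP, but the core logic of a mediation decomposition — a total effect splitting into a direct effect plus an indirect effect running through the mediator — can be sketched with ordinary least squares on simulated data. All variable names, effect sizes, and distributions below are illustrative assumptions, not the fitted JASP estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 51  # 50 states plus D.C.

# Simulated predictors (illustrative only, not the essay's dataset)
waiver = rng.integers(0, 2, n).astype(float)   # liability waiver indicator (NUR)
log_density = rng.normal(4.0, 1.0, n)          # LOG(population density) (LN_P)

# Mediator and outcome generated with known (assumed) effects
log_cases = 0.5 * waiver + 0.6 * log_density + rng.normal(0, 0.3, n)                     # LN_C
log_deaths = 0.8 * log_cases + 0.2 * waiver + 0.3 * log_density + rng.normal(0, 0.2, n)  # LN_D

def ols(X, y):
    """Least-squares coefficients, intercept first."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

X = np.column_stack([waiver, log_density])

a = ols(X, log_cases)[1]                          # waiver -> cases (path a)
coefs = ols(np.column_stack([X, log_cases]), log_deaths)
direct, b = coefs[1], coefs[3]                    # waiver -> deaths (c'), cases -> deaths (b)
total = ols(X, log_deaths)[1]                     # waiver -> deaths (total effect c)

# For linear OLS with the same covariates, c = c' + a*b holds exactly
print(round(total, 3), round(direct + a * b, 3))
```

The exact identity between the total effect and the sum of the direct and indirect effects is a property of nested linear regressions sharing the same covariates; it is what lets a path model apportion a predictor’s influence between its direct route and its route through the mediator.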

The model results indicate a strong relationship between state-level COVID-19 death rates, population density, and Red State/Blue State status (as measured by the 2016 Trump vote percentage): Densely-populated states (z = 4.89; p < .001) and Red States (z = 4.24; p < .001) had significantly higher COVID-19 death rates over the study period.

[Overall, this model explained 78 percent of the state-level variance in COVID-19 death rates.]

Densely-populated states like New York and New Jersey, however, should not feel exonerated by these results. Nursing home liability waiver states had significantly higher COVID-19 death rates. In fact, the liability waiver indicator was the strongest correlate of COVID-19 death rates (z = 2.83; p = 0.005) after population density and Red State/Blue State status.

The other statistically significant variable in the model was the percentage of a state’s residents without health insurance (z = 2.04; p = 0.04).

While not statistically significant, there is a weak indication that economically active states — that is, states with high GDP per capita — also had slightly higher COVID-19 death rates. It is possible that an active economy inevitably brings with it higher mortality risks, irrespective of a state’s policy attempts to control the spread of a pathogen. Any benefits a state’s citizens might gain from having more financial resources to protect themselves from a dangerous virus may be dampened by the state’s higher levels of economic activity.

Final Thoughts: Our politicians, media, and government experts have a credibility and honesty problem

These results are admittedly suggestive, not conclusive. For example, methodological improvements — such as accounting for the count-data nature of the dependent variable — must be considered.
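One such improvement — treating deaths as counts rather than a log-transformed rate — could be handled with a Poisson (or negative binomial) regression using log-population as an offset. Below is a minimal sketch fit by iteratively reweighted least squares on simulated data; the coefficient values, population figures, and sample size are assumptions for illustration only:

```python
import numpy as np

def poisson_glm(X, y, offset=None, iters=50):
    """Poisson regression (log link) fit by iteratively reweighted least squares."""
    X = np.column_stack([np.ones(len(y)), X])   # prepend intercept
    offset = np.zeros(len(y)) if offset is None else offset
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = X @ beta + offset
        mu = np.exp(eta)
        z = (eta - offset) + (y - mu) / mu      # working response
        XtW = X.T * mu                          # weights W = mu for the log link
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

# Simulated jurisdictions: deaths ~ Poisson(rate * population),
# with an assumed higher death rate in waiver jurisdictions
rng = np.random.default_rng(1)
n = 500
waiver = rng.integers(0, 2, n).astype(float)
population = rng.uniform(1e5, 1e7, n)
true_rate = np.exp(-7.0 + 0.4 * waiver)         # deaths per resident (assumed)
deaths = rng.poisson(true_rate * population)

beta = poisson_glm(waiver.reshape(-1, 1), deaths, offset=np.log(population))
print(round(beta[1], 2))   # should recover roughly the assumed 0.4 effect
```

The offset term is what turns a raw count model into a per-capita rate model, which is the appropriate way to compare states of very different sizes without log-transforming the dependent variable.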

Nonetheless, it is unimaginable how anyone could look at the COVID-19 death rates and not see how major policy-related differences have been associated with variation in state-level outcomes. Equally apparent, the policy failures cut across partisan lines. Democratic-dominated states made major policy mistakes, as did Republican-dominated states.

And the previous statistical analysis doesn’t even consider the economic and social damage done by states through their COVID-19 policy strategies (lockdowns, mask mandates, crowd limits, public health care, school closings, etc.).

But the policy failures go far beyond liability waiver laws or health insurance coverage. The failures are at every level of government and society.

At one point, we had the leading virologist in the federal government tell us masks were not effective in slowing the spread of SARS-CoV-2, despite privately knowing they were, only to publicly reverse that recommendation — thereby damaging the credibility of anything he said later, no matter how factually correct. Do I harshly judge Americans now hesitant to heed Dr. Fauci’s advice about getting vaccinated? I do not.

Do I personally trust Dr. Fauci when he says the relatively new mRNA vaccine technology has no long-term health implications? For good reason, I do not — though I was still among the earliest to receive both “jabs” (Moderna) and did not hesitate to get my teenage son vaccinated (and he had previously contracted the virus).

This same government scientist slandered and shamed anyone suggesting the SARS-CoV-2 could have leaked from a Chinese lab, only to admit many months later that such a possibility could not be ruled out.

Credibility and honesty are everything in a public crisis like the COVID-19 pandemic. Occasionally being wrong is forgivable (and expected). But to fail on both accounts, as Dr. Fauci and other political-media elites have done more than once, is public service ineptitude.

Dr. Fauci should have been fired after his lies to Congress about mask-wearing — and certainly should have been fired after backtracking on the ‘lab leak’ hypothesis — but in today’s partisan environment, he’s instead lionized.

Dr. Fauci’s disingenuous behavior is more than matched, however, by the rogues’ gallery of Republican governors — Arizona’s Doug Ducey, Alabama’s Kay Ivey, Mississippi’s Tate Reeves, Iowa’s Kim Reynolds, and South Dakota’s Kristi Noem are the first names that come to mind — who chose to ignore common-sense state measures — stay-at-home orders, mask mandates, and crowd size restrictions — at the expense of citizens’ lives.

Recall the 20 states in Figure 1 with the highest COVID-19 death rates. The high uninsured rates in southern U.S. states are bad enough — the likely product of systemic poverty and racism — but it is unfathomable how states with the population density and socioeconomic advantages of the two Dakotas and Iowa could still end up in the Top 20 for COVID-19 death rates. California Governor Gavin Newsom faces a serious recall challenge in a state that has done reasonably well, both in terms of COVID-19 deaths and economic impact, but those three Midwest governors remain overwhelmingly popular in their respective states (Iowa, South Dakota, North Dakota).

Life and politics are not always fair.

Is it coincidence the Americans who disproportionately died during this pandemic — elderly, working class, overweight, Black and Hispanic Americans — are also underrepresented by the policies coming out of our political establishment during this pandemic?

For evidence of this claim, I present nursing home liability waivers — what may be the pandemic’s worst public policy.

  • K.R.K.
  • I am a survey and statistical consultant with over 30 years’ experience measuring and analyzing public opinion.
  • Send comments to: kroeger98@yahoo.com

Appendix A: Point Biserial Correlation

Appendix B: Full Mediation Model

Dependent Variable = COVID-19 Deaths per 1 million people

Censoring Iranian (and Palestinian and Houthi) news sites is un-American

By Kent R. Kroeger (Source: NuQum.com; June 28, 2021)

On Tuesday, June 22, the US government “seized” several Iranian, Palestinian and Houthi news websites, alleging their involvement in spreading “disinformation” and calling them a “threat to national security”.

DUBAI, June 22 (Reuters) — The U.S. Justice Department said on Tuesday it seized 36 Iranian-linked websites, many of them associated with either disinformation activities or violent organizations, taking them offline for violating U.S. sanctions.

Several of the sites were back online within hours with new domain addresses.

Writer’s note: It should be acknowledged that the websites blocked by this U.S. action were on servers owned or controlled by U.S. companies — hence, a violation of U.S. sanctions against Iran.
__________________

There was a time when another country — the Soviet Union — had the genuine ability to, for all practical purposes, destroy my country in an all-out nuclear war. [That ability, in what is now Russia, still exists today — but somehow the threat doesn’t seem as palpable as it did before the downfall of the Soviet Union.]

The Soviet Union was a legitimate superpower, hostile to the U.S. and its allies, and armed with around 5,000 nuclear warheads in the late 1970s (with a good number pointed directly at the U.S. mainland).

As a teenager at the time in Cedar Falls, Iowa, I was able to connect my Heathkit shortwave radio receiver to a 120-foot antenna running in a slope from my bedroom window to an old ceramic insulator attached to a pole at one end of my family’s clothes line.

Thin strips of colored tape identified frequencies where I could listen to worldwide radio broadcasts of stations such as the BBC’s World Service, Deutsche Welle (West Germany), Radio Peking, Radio Havana, Voice of the Arabs (Radio Cairo), Radio Berlin (East Germany), Radio Budapest, and last, but far from least, Radio Moscow.

I would get chills hearing the vibraphone intro to the Radio Moscow newscasts, a tune (I think it was called “Moscow Nights”) that was slow, sweet and oddly melancholy. [Regrettably, the Gorbachev era would replace that charming interstitial with music that sounded like the theme song from a forgettable 1980s ABC crime drama. In retrospect, that was the first clear sign the Soviet Union was on its last legs.]

Some of the broadcasts were in English, but most of them were in languages I couldn’t understand. But it didn’t matter. It was the thrill of hearing something far away (and slightly forbidden) that made me listen to that radio for hours on humid summer nights, long after my family had gone to bed.

Listening to Radio Moscow was my feeble, teenage attempt at defiance at a time when U.S.-Soviet relations seemingly were getting worse heading into the Reagan-era of American politics.

But it was more than defiance. I honestly believed it was my right (even civic duty) to hear news and opinions from the perspective of other countries — especially our “enemies.”

And it wasn’t that I found Radio Moscow more informative or trustworthy — quite the opposite, its news stories were often brazenly optimistic:

April 4, 1980: “…the Supreme Soviet Presidium ratified the treaty between Afghanistan and the Soviet Union on the conditions for the temporary stay of a limited contingent of Soviet forces in Afghanistan territory.” [Note: The size of the Soviet occupation force would peak around 65,000 and they didn’t leave until 1989.]

or comically understated:

April 29, 1986: “An accident has occurred at the Chernobyl nuclear power plant as one of the reactors was damaged. Measures are being taken to eliminate the consequences of the accident.”

But that was part of Radio Moscow’s charm…and its importance.

Why would an American not want to know the mindset of an adversary? How could anyone honestly call themselves informed without knowing the official views of other governments?

“It’s disinformation!…It’s propaganda!,” is the cry we hear now from both the left and right of the political spectrum — particularly with respect to Iran or Palestinians.

Even if the Press TV (Iran) or al-Masirah TV (Yemen Houthis) or Palestine-Al Youm websites were merely publishing official propaganda, why would I want the U.S. government, or a private U.S. company, to decide for me whether I can read or hear their content?

The thing I remember about the Cold War…

One of the distinctive aspects of the U.S. government during the Cold War was its irrepressible sense of superiority and invincibility.

We had, after all, soundly beaten the fascists in Germany and Japan in World War II (OK, I realize the Soviet Union was a rather important part of that victory) and economically we were second-to-none in the decades immediately following the war.

From a U.S. citizen’s perspective, while we understood our homes and cities could be evaporated by a single Soviet thermonuclear device, there was a Dean Martin-like boozy confidence among us — at least until the Vietnam War and Watergate — that the American ideology of ‘freedom and capitalism’ was intrinsically superior to authoritarianism and communism.

We may have been afraid of the Soviet war machine, but we were not afraid of the Soviet way of life. It was inferior and we knew it. My mom and dad worried more about inflation than anything the Soviet Union could do to them.

Evidence of our nation’s confidence is how it addressed Soviet propaganda on platforms such as Radio Moscow. Though the Kremlin may have jammed Western broadcasts by radio services like the Voice of America and the BBC, we did not return the favor.

It wouldn’t have been cheap, but we could have done it.

According to Mark Winek, an expert on Cold War-era propaganda efforts, it demonstrated the West’s confidence that we didn’t feel the need to jam Radio Moscow in response to Soviet jamming. In contrast, the Soviets had every reason to fear Western influence:

“While Radio Moscow’s signals were rarely jammed by other nations, the Soviet Union actively jammed the broadcasts of Western stations such as the BBC and the Voice of America. The purpose of this was to prevent Soviet citizens from being able to tune in the Western broadcasters, fearing ‘Western cultural infiltration’. Indeed, they may have had cause to worry: the Voice of America estimated 8 million Soviet citizens listened into Western broadcasts.”

That is the difference between a confident, secure country and one that is not.

How times have changed.

At present, I see my country becoming fragile and increasingly paranoid towards anyone who says something mean — particularly against our two establishment political parties. We are now so delicate, our enemies aren’t just somewhere in some faraway desert or arctic tundra, they are among us. They might be one of your work colleagues or even in your own family. We now believe this so deeply that few Americans blinked when President Joe Biden asked them this month to report these suspected ‘domestic terrorists and extremists’ to the nearest law enforcement or FBI office.

At least in the Cold War, we feared actual scary things like 15-megaton thermonuclear warheads and bureaucratic group-think. Today, some Americans turn into a mush puddle at the sight of a bare-chested man wearing a furry hat with horns and holding a Qanon sign.

Bare-chested, horn-hat guy holding a Qanon sign in Peoria, Arizona (Photo by TheUnseen011101; photo released into the public domain by its author.)

We are not the same country that once stood nose-to-nose with the Soviet menace without blinking. Today, our current administration openly prosecutes a publisher who published whistleblower document leaks regarding U.S. war atrocities (Julian Assange) and recent administrations have unapologetically monitored the communications of journalists deemed hostile (i.e., published administration leaks).

The day the U.S. Department of Justice shut down the Press TV website, the stories on its home page included stories on:

  • Iran’s military top brass expressing a willingness to cooperate with President Biden (the kind of story that makes state-run news agencies relevant to foreign diplomats and journalists),
  • The Taliban making gains in Afghanistan (undeniably true),
  • Iran’s President-elect receiving congratulatory calls from world leaders (also undeniably true),
  • and a feature on Israeli settlers attacking Palestinians in Sheikh Jarrah (a fact the Biden administration has acknowledged).

Is there propaganda on Press TV’s web pages? Of course. It is a state-run news organization after all. Does it print misinformation and promote conspiracy theories on the Qanon-level of a cadre of U.S. elites running a child slavery ring out of a pizza parlor on Connecticut Avenue? Nothing I’ve ever seen on its pages has come close to being that far removed from the sanity train.

Press TV is the English-language service of the Islamic Republic of Iran Broadcasting, the Iranian government’s news organization — which includes the Tehran-based internet radio station World Service 4 (which is also being blocked by the U.S. government).

I recently listened to a story on World Service 4 (before it was blocked) in which the Iranian government accused U.S. sanctions of directly causing unnecessary COVID-19 deaths among Iranian civilians. I don’t necessarily agree with that statement, but why would the U.S. government prevent me from hearing or reading that? Because I might believe it? That is what a government does when it is insecure about its own honesty. That is the same frame of mind behind the Iranian government suppressing Western ideas about women’s rights or press freedom. They do it because they are insecure.

It should be beneath our country to do such a thing — particularly against such a minor threat to the U.S. as Iran.

They all do propaganda — the trick is knowing how to separate the facts from the b.s.

On our own home front, we must dispense with the grammar-school-level belief that U.S. news organizations are objective and bound solely by the “truth” when crafting their daily headlines and newscasts. Most American journalists serve their economic interests as well as their social and economic class. The “truth” comes somewhere after that.

I.F. Stone famously wrote that “all governments lie.” The same can be said for all news organizations. They don’t do it all the time — not even most of the time — but when they do, they will bamboozle you like a cheating lover.

In a country where Facebook and Google appear to be suppressing news stories and social media conversations related to the generic drug Ivermectin (an antiparasitic drug with known antiviral uses that has shown enough promise as a COVID-19 treatment to warrant the recent start of large-scale clinical studies on its effectiveness), it is myopic to suggest U.S. news and social media organizations don’t peddle propaganda too. Sometimes propaganda is not in what is said or written, but in what is not said or written.

So, in the end, I would rather take my chances with my own capacities and limitations. Which is to say: I don’t need the U.S. government protecting me from Iranian or anyone else’s propaganda. Over 40 years of digesting content from the U.S. news media has made me a bit of an expert in picking out fact from nonsense.

  • K.R.K.

Send comments to: kroeger98@yahoo.com

Africa’s Sahel nations still paying price for Obama’s Libya policy

By Kent R. Kroeger (Source: NuQum.com; June 3, 2021)

“We have not yet finished our mission. But we do not foresee staying indefinitely. Once the sovereignty of Mali is restored, once MISMA (a UN-backed African military force) can replace our own troops, we will withdraw,” the French President told a news conference in Bamako, Mali.

Those were the words of French President François Hollande in February 2013, days after a French-led military offensive had driven Islamist rebels out of the country’s north, except for the city of Kidal.

Eight years later — French troops remain in Mali.

And in that time since President Hollande’s optimistic appraisal of the situation, Mali has weathered two coup d’états — the last occurring a little over a week ago when the Mali military, without resistance from the French military in Mali, detained the country’s president, prime minister, and defense minister. Though widely condemned in formal communiques by the European Union (EU), U.S., African Union and U.N., the likely result of this latest coup is that Mali (with the help of the French) will remain in a constant state of war.

It isn’t just Mali experiencing political instability. Since the 2011 NATO-backed revolt that brought down Colonel Muammar Gaddafi’s dictatorship in Libya, six of Africa’s 10 Sahel countries have seen at least one coup or attempted coup (Mali, Burkina Faso, Nigeria, Chad, Sudan and Eritrea).

“The Sahel is on fire,” journalist Bostjan Videmsek wrote in 2016 as he covered an emerging Mali refugee crisis.

But it is not just Africa’s Sahel countries that are in turmoil. Since 2011, 30 coups or attempted coups (hereafter, what I call ‘coup events’) have occurred in 17 countries across the African continent.

In fact, in the post-World War II period, coup events are on the increase in Africa, in contrast to the rest of the world (see Figure 1). Today, on average, Africa witnesses around three coup events per year, whereas the rest of the world sees about two. Indeed, even if we remove the outlier year of 2013 from the equation, Africa has seen an increasing trend of coup events since 1946.

Figure 1: Trends in Coup/Coup Attempts since 1946

Data source: Wikipedia (supplemented by my own research which is available upon request)

What is going on in Africa that might explain this troubling trend?

We can rule out one of the standard explanations of African political instability in the 1970s and 80s: government debt.

As seen in Figures 2 and 3, the African countries with significant coup events in the past 10 years exist across the entire indebtedness spectrum. While the most indebted country — Sudan, at over 250 percent of GDP — has also been one of the most politically volatile, some of the least indebted countries have likewise seen significant coup events in this period: Burundi, Comoros, Benin, Burkina Faso, Nigeria, Guinea-Bissau, Mali, Chad, and the Central African Republic.

Figure 2: Gov’t Debt to GDP Ratio (%) for African countries (2019/2020)

Source: TradingEconomics.com; Red bars indicate countries with a coup/attempted coup in the last 10 years

Figure 3: Change in Gov’t Debt to GDP Ratio (%) for African countries

Source: TradingEconomics.com; Red bars indicate countries with a coup/attempted coup in the last 10 years

Extreme debt financing burdens to foreign creditors can stunt economic growth and compel governments to divert money from critical social services, leading to increased social instability. But that does not seem to be the major driving force today — at least not the debt part of the equation.

Instead, in the past decade, oil-exporting African countries have endured declining petroleum prices — which have been in a general decline since a $139 peak (for West Texas Intermediate crude) in 2008 — while sub-Saharan African countries have seen their spectacular GDP-per-capita growth of the 2000s start to stagnate (see Figure 4).

Figure 4: GDP per capita growth in Sub-Saharan Africa since 1960

Source: World Bank

In his 1970 book, Why Men Rebel, political scientist Ted Gurr introduced the concept of relative deprivation, which he defined as the discrepancy between what people think they deserve and what they think they can actually get.

“The potential for collective violence varies strongly with the intensity and scope of relative deprivation among members of a collectivity,” wrote Gurr.

Though Gurr’s thesis since has been significantly modified — with a society’s capacity for political violence being one major addition to the model — it still offers useful insights.

It is well-known that Africa is resource rich (see Figure 5). The African continent accounts for 20 percent of the world’s land mass, 17 percent of its population, and 3 percent of its GDP, yet it contains 30 percent of the world’s remaining mineral resources.

While it is inaccurate to assume Africa’s resource wealth is the only reason for the continent’s strong economic growth in the 2000s, it was a major factor in China’s strategic decision to invest heavily there in the past 20 years (see Figure 6).

Figure 5: Mapping Africa’s Natural Resources

Graph by Al Jazeera

Figure 6: U.S. and Chinese Foreign Direct Investment to Africa since 2003

Graph by China-Africa Research Initiative (Johns Hopkins University — SAIS)

Yet, as of late, China’s investment in Africa has waned, which has played a role in Africa’s dampening economic growth.

Taken together, stagnant growth amid rising expectations born of the 2000s economic boom cannot be discounted when explaining Africa’s current political instability.

But such a conclusion neglects the elephant in the room — the negative impact of NATO’s and the Barack Obama administration’s destabilizing of northern Africa — says policy analyst Robert Morris, author of Avoiding The British Empire: What it Was, and How the US can Do Better.

How NATO and the U.S. mucked up Libya (and Syria)

“The destruction of Libya was key to the refugee flows that destabilized the European Union in 2014, and led to the loss of one of its richest countries (United Kingdom) with Brexit in 2016,” contends Morris. “The effects of Gaddafi’s killing on the Sahel were both immediate and long lasting. The financial network that had come to underpin the prosperity of much of North Africa disappeared. Gaddafi’s African soldiers dispersed back to their countries and took their weapons with them. Mali was the first to fall.”

It is also worth remembering that many of the weapons from the 2011 Libyan revolt found their way to Syria and its civil war. Over 400,000 Syrians have died in Syria’s ongoing civil war, and the U.S. remains entrenched in the country’s northeastern sector, ostensibly to protect Syrian Kurds. But as we know from Trump’s irrepressible candor, the purpose is largely to control much of Syria’s vital oil and gas reserves and thereby constrain Assad’s ability to rebuild his war-torn nation.

In the final analysis, the West’s weaponizing of the Libyan civil war had a direct and indirect relationship to coup events in Africa’s Sahel, the Syrian civil war, and the refugee crisis in Europe.

Those results alone establish how bad Obama’s Libya policies were during the Arab Spring of 2011.

As unstable as Gaddafi may have been, Libya under his leadership was becoming an economic powerhouse by African standards. Up to 2011, oil money from Libya was spreading throughout Africa.

And then came the Arab Spring of 2011 and, more importantly, the West’s interference in its progress.

“NATO scooped out North Africa’s economic heart and set it on fire,” argues Morris. “After Libya’s destruction, every economy in the Sahel came to a screeching halt.” Before 2011, Libya’s GDP per capita exceeded European Union countries such as Romania and Bulgaria, notes Morris. Today, Libya struggles to reestablish political stability so that it can, once again, become one of Africa’s most prosperous countries.

The “Iraqification” of Africa by Biden and the EU

In the midst of political instability in the Sahel, the European Union (EU) has recently dedicated €8 billion to its European Defence Fund (EDF), which will finance new military weapons and technologies for militaries within the EU, and launched the European Peace Facility (EPF), which for the first time authorizes the EU to supply military weapons, along with equipment and training, to non-European military forces around the world.

A reasonable assumption is that significant amounts of these EU-funded weapons will go to military operations in northern Africa, such as in Mali.

The U.S. military, of course, already has a significant presence throughout Africa, including 29 bases and installations (see Figure 7). Ten years ago, that list would have had around 10 bases and installations.

Figure 7: U.S. military bases and operations within Africa (as of 2019)

Source: The Intercept

For the most part, Trump neglected African issues, and by the end of his term was pulling U.S. forces out of countries like Somalia — to the vocal consternation of the Washington, D.C. neoliberal and neoconservative establishment.

Trump’s troop withdrawals were a direct threat to African stability, moaned more than a few media elites and foreign policy analysts, who conveniently ignored the fact that today’s growing instability in Africa correlates with the increased engagement of the U.S. military on the continent. When the U.S. military stood up USAFRICOM at the end of the George W. Bush presidency, one of its first operations under Operation Enduring Freedom (“The Global War on Terror”) was a joint U.S.-NATO effort to “stabilize” the Saharan and Sahel regions of Africa at the start of the Libyan civil war in 2011 (U.S. military activities in the Sahel also fall under Operation Juniper Shield).

Instead of stabilizing the region, the U.S. and NATO further weaponized it. There are currently about 40 million guns and light weapons circulating among civilians in Africa, according to one UN report (https://www.un.org/africarenewal/magazine/december-2019-march-2020/silencing-guns-africa-2020). Government entities hold around 11 million guns and light arms, according to that same report.

Since the start of his presidential candidacy, Joe Biden has hinted at a more active U.S. role throughout the world (not just Africa), and if the past is prologue, this will mean deeper U.S. military engagements.

Evidence of his intent came in April when the Biden administration tapped Jeffrey Feltman, a former senior U.S. and United Nations diplomat known for his advocacy of robust U.S. interventionism, to be the U.S. special envoy to the Horn of Africa — an area where Islamist groups remain a palpable threat to regional stability and where growing tensions between Ethiopia and Sudan threaten to further add volatility to the region.

U.S. Secretary of State Antony Blinken highlighted the latter issue when announcing Feltman’s appointment: “Of particular concern are the volatile situation in Ethiopia, including the conflict in Tigray; escalating tension between Ethiopia and Sudan; and the dispute around the Grand Ethiopian Renaissance Dam.”

Where Trump made few promises to African leaders and they responded by not asking for many, Biden has invited the opposite dynamic.

The Biden administration had not even moved in when former Somali Prime Minister Abdi Yusuf openly pleaded with the incoming administration to recommit to “protecting Somalia” from al-Shabab and other Islamist groups.

Feltman’s appointment and other policy signals from the Biden administration (such as recent State Department allegations of human rights abuses in Ethiopia and the designation of insurgent groups in Mozambique and the Democratic Republic of the Congo as terrorist organizations) show the U.S. wants once again to be the world’s school hall monitor or, in less snarky terms, “an international voice of conscience.” Consonant with the State Department’s declaration on Mozambique-based terrorist groups, the U.S. military is now actively training Mozambique’s security forces.

But before assuming purely altruistic U.S. motives, consider that the French oil and gas company Total (TOTF.PA) recently announced it will restart construction of its $20 billion liquefied natural gas project in Mozambique given the improved security situation, having withdrawn its workforce from northern Mozambique in January over security concerns. Total’s demand for a 25-kilometer secure buffer zone around its project site has been accepted by the Mozambique government, in part due to U.S. security assistance.

These U.S. (and French) policy moves in Mozambique have led some Africa watchers to warn of the “Iraqification” of Africa. In other words, the U.S. is creating an expanding, self-justifying military presence with open-ended mission goals tracked by fuzzy performance metrics.

Africa policy observer Jasmine Opperman says of U.S. and French current policies in Mozambique: “The worst…would be an intervention, direct or not, of the great powers.”

Yet, that is exactly what the Biden and Macron governments are in the process of doing.

There are still reasons for optimism in Africa

But despite the Biden administration’s predictable ramping up of U.S. involvement in Africa’s most intractable military conflicts, many foreign policy analysts, like Morris, remain optimistic for Libya and other African countries.

For starters, after years in which rival groups in Libya asserted their legitimacy as the country’s rightful government, the parliament approved a national unity government on March 10, headed by Prime Minister Abdul Hamid Dabaiba. The transfer of power was peaceful, something that can never be taken for granted.

Due to Libya’s oil wealth, a stable Libya is central to all of the Sahel economies, as well as to Tunisia, which still has a functioning (though fragile) democracy. Prior to the Libyan civil war, money sent home by migrant workers in Libya was a big part of the economy of every Sahel country. That source of stability may soon return.

“Libya’s population is rich and educated enough that it’s easy to imagine Tunisia’s experiment spreading there if stability can be preserved,” asserts Morris. “Algeria and Morocco are both relatively rich, fairly well organized places that are very ready for a new system. With a stable Libya, Tunisia could lead a North African bloc into democracy. This stability wouldn’t just shut down refugee flows, it would provide a new platform for cooperation and economic development, which would, in turn, lead to stability and prosperity for the whole of the Sahel.”

“The Sahel can be a resource for the world’s diplomats and business people again, instead of just a profit center for the French and U.S. militaries,” says Morris. “This isn’t just possible, it’s the most likely result if Libya manages to stabilize.”

Now if only the U.S. and France don’t find a way to screw up this progress.

  • K.R.K.

Send comments to: nuqum@protonmail.com

Neocons may have tipped the balance in the 2020 presidential election

By Kent R. Kroeger (Source: NuQum.com, May 25, 2021)

The Electoral College Results for the 2020 Presidential Election

Unknown at the time, President Donald Trump’s prospects for re-election may have ended on June 20, 2019, when he unilaterally decided — against the wishes of his hawkish foreign policy and military advisors — to abort a planned retaliatory strike against Iran.

Revealing a side of Trump few in the news media acknowledged or believed to exist, the president cited his fear that Iranian civilians would be among the victims of an American airstrike against the Iranian military installations suspected to have been used a day earlier to shoot down a U.S. surveillance drone.

Trump told his stunned advisors, some already monitoring the military operation, that the contrast between the loss of one unmanned drone and the potential loss of dozens of Iranian civilians would be a public relations victory for Tehran, whereas the U.S. military gain would be minimal. Some mocked his scrubbing of the Iran attack as proof that the president was getting his foreign policy advice directly from Fox News’ Tucker Carlson, a persistent critic of the expected U.S. attack on Iran in the days leading up to Trump’s June 20th decision.

Never mind that Trump’s neocon-friendly Middle East policies (such as the exit from the Iran Nuclear Deal and the re-imposition of severe economic sanctions) were the proximal causes of the June 2019 tensions between the U.S. and Iran. When the moment came to pull the trigger, the president (to his credit) stepped back.

Most prominent members of the GOP’s neocon-wing muted their public criticisms of Trump’s decision, including Liz Cheney (R-WY), who told conservative radio host Hugh Hewitt: “The failure to respond to this kind of direct provocation that we’ve seen now from the Iranians, in particular over the last several weeks, could in fact be a very serious mistake.”

Privately, the failure of Trump to attack Iran was seen by some neocons as further evidence the president was a non-interventionist at heart, some even suggesting, when it comes to using military force, he’s more like Obama than George W. Bush. “There is fear in him (Trump),” scolded New York Times columnist Bret Stephens.

It was bad enough that neocons had to bite their collective tongues every time Trump mentioned the need for a U.S. military withdrawal from their pet projects in Syria and Afghanistan. But after refusing to attack the Iranians over a drone, Trump soon followed up with the suggestion that he was open to new negotiations with the Iranians over their nuclear program: a backbreaking no-no in the neocon world.

If neocons are consistent on anything, it is their religious-like devotion to the belief that diplomacy doesn’t work with America’s foes.

On that point, September 2019 saw the last straw for Trump’s national security advisor John Bolton, who was often more of an advocate for his own policy ideas than a steward for Trump’s. The breaking point for Bolton? A planned (though eventually cancelled) meeting with the Taliban at Camp David. Not only would such talks be “doomed to fail,” according to Bolton, they would be disrespectful to the families of 9/11 victims.

Trump would fire Bolton that same September (or Bolton resigned, depending on who you ask), and barely a week after his exit, Bolton was launching sharp, thinly-veiled attacks on his former boss at a private luncheon at the Gatestone Institute in New York City. The think tank, created in 2008 by Sears Roebuck heiress Nina Rosenwald, focuses on rolling back political Islam and the “Islamization” of non-Muslim societies, and provided the perfect platform for Bolton (who is now on Gatestone’s payroll) to signal to his neocon kinsfolk this one important message: Trump is not one of us and cannot be trusted.

Reinforcing that critique, Trump himself responded to Bolton’s criticisms by reminding everyone that he was not, in fact, a neocon interventionist:

“I was critical of John Bolton for getting us involved with a lot of other people in the Middle East. We’ve spent $7.5 trillion in the Middle East and you ought to ask a lot of people about that. John was not able to work with anybody, and a lot of people disagreed with his ideas. A lot of people were very critical that I brought him on in the first place because of the fact that he was so in favor of going into the Middle East, and he got stuck in quicksand and we became policemen for the Middle East. It’s ridiculous.”

Trump would nonetheless make significant attempts after the Bolton firing to regain support among his neocon constituents.

However, even the impulsive (and arguably counterproductive) assassination of Iran’s top general, Qasem Soleimani, on January 3, 2020, was not enough to allay concern among neocons about Trump’s trustworthiness to stand by his previous neocon policies (i.e., an attempted coup in Venezuela, ongoing support for a Saudi-UAE regime change war in Yemen, the marginalization of Palestinians interests in the Israeli-Palestinian conflict, the ratcheting up of Cold War-like rhetoric against China, etc.).

Where the Bolton Revolt offered evidence that neocon intellectuals were bailing on Trump, perhaps more distressing to the Trump re-election effort were early signals from the GOP’s billionaire donor class that their financial support for Trump 2020 was not to be taken for granted — a group that included Rebekah Mercer, a billionaire heiress who has a history of supporting neocon organizations including Gatestone.

Whether he knew it or not, Trump may have sealed his electoral fate when he cancelled the Iran airstrikes on June 20, 2019.

What is a neoconservative, really?

As with so many of the terms and labels commonly thrown around in the political news media, the term neocon (short for neoconservative) is not as well-defined as one might think.

Political historians generally trace the term back to the 1960s and a group of prominent Democrats — such as Washington Senator Henry “Scoop” Jackson — who were alarmed by their party’s growing pacifism during the Vietnam War. In that period, neocons were politically liberal (i.e., generally supportive of civil liberties and social justice) but far more interventionist and open to the use of military force in foreign policy.

In the 1970s, however, the term neocon came to represent post-Vietnam Republicans who wanted to move past the debacle of the Nixon presidency (particularly the culture war issues) and return the party’s focus towards promoting a robust U.S. military and muscular foreign policy.

Over time the neocon label has been increasingly applied to politicians from either party (e.g., Hillary Clinton, Dick Cheney) and is generally attached to politicians and foreign policy leaders who believe military force is often a viable option when dealing with our country’s adversaries; and, in some cases, may be the only constructive option.

Neocons are commonly distinguished from foreign policy realists in that the latter typically advocate the use of military force only when U.S. interests are directly attacked or an important strategic interest is at stake (e.g., fascist control of Europe, threats to world oil supplies); whereas neocons are more comfortable with the U.S. military acting as the ‘free world’s police force’ and do not necessarily require a direct threat to U.S. strategic interests (e.g., Syria, Yemen, the 2003 Iraq War). Indeed, neocons often justify the use of military force simply to promote regime change within authoritarian nations we don’t like.

These are broad definitions and, as such, can lose their empirical value when they are distilled down to one or two attitudes and encompass a large swath of the American public. Nonetheless, how Americans answered this simple question heading into the 2020 presidential election — How willing should the U.S. be to use military force to solve international problems? — may have had a significant relationship to the election outcome.

Are there enough neocon-aligned Americans to change a presidential election outcome?

How many eligible voters in the U.S. are potentially attracted to neocon policy preferences? Are there enough to sway a presidential election? The short answer is: YES.

The 2020 American National Election Study (ANES) conducted by The University of Michigan and Stanford University before and after the 2020 election offers some insight on this question (see Appendix A for a description of this study and Appendix B for a list of key questions from the 2020 ANES). Figure 1 breaks out the number of ANES 2020 respondents according to their “willingness to use force to solve international problems” and the strength of their party identification.

Figure 1: Willingness to Use Force by Party Identification (ANES 2020)

About 15 percent of U.S. eligible voters are “extremely” or “very” willing to use military force to solve international problems (4.7% + 10.6%). That is 36.6 million eligible voters, whom I presume are the Americans most open to neocon policy ideas.

Among those 15 percent, 58 percent are strong partisans — which translates to almost 9 percent of the 239 million Americans eligible to vote in 2020 (or about 21.2 million people).

Of those 21.2 million people, 8.7 million are “strong” Democrats and 12.5 million are “strong” Republicans, according to the 2020 ANES.
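As a quick sanity check, the arithmetic above can be reproduced in a few lines of Python (the percentages and the 239-million eligible-voter total are taken directly from the essay):

```python
# Back-of-envelope check of the figures above. The percentages and the
# 239-million eligible-voter total are taken directly from the essay.
eligible = 239_000_000

# "extremely" (4.7%) plus "very" (10.6%) willing to use force
neocon_share = 0.047 + 0.106               # about 15.3%
neocon_pool = eligible * neocon_share      # about 36.6 million

strong_partisan_share = 0.58               # 58% of that group are strong partisans
strong_neocons = neocon_pool * strong_partisan_share  # about 21.2 million

print(f"{neocon_pool / 1e6:.1f}M eligible voters open to neocon ideas")
print(f"{strong_neocons / 1e6:.1f}M of them are strong partisans")
```

Both results round to the essay’s figures of 36.6 million and 21.2 million.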

[Note: 81.3 million people voted for Biden and 74.2 million people voted for Trump in 2020 — a difference of 7.1 million votes.]

Since strong partisans are far more likely to vote than weak partisans, getting these potential voters to the polls is a high priority for any presidential campaign.

But was Trump able to keep the vast majority of strong, neocon-aligned Republicans in 2020?

The start of that answer is in Figure 2, which shows a strong relationship between strength of party identification (in this case, ‘strong’ Republicans versus ‘weak’ Republicans, Democrats and independents), attitudes regarding the ‘use of force to solve international problems,’ and 2020 presidential vote choice.

Figure 2: Presidential Vote Choice in 2020 by ‘Willingness to Use Force’ and Strength of Party ID

Data source: ANES 2020 (Data weighted; 7,453 post-election survey respondents)

Seventy-nine percent of ‘strong’ Republicans voted for Trump, while only 3 percent of those voters defected to Democratic candidate Joe Biden. Another 17 percent did not vote and one percent voted for a third party candidate. Conversely, 79 percent of ‘strong’ Democrats voted for Biden, while a mere 2 percent voted for Trump (data not shown in Figure 2). Nineteen percent of ‘strong’ Democrats did not vote and one percent voted for a third party candidate.

The striking finding in Figure 2, however, is the drop off in Trump’s support among ‘strong’ Republicans who are ‘extremely’ or ‘very’ willing to use force to solve international problems (henceforth, referred to as Republican neocons). Only 59 percent of Republican neocons voted for Trump, compared to 81 percent of other ‘strong’ Republicans.

This is an astonishing decline in support among what should be loyal Republican partisans. In numbers, this means 883,000 Republican neocons abandoned their party’s incumbent president in the 2020 general election.

Is that enough to change the outcome of an election in which Biden beat Trump by 7.1 million votes and 74 electoral votes?

If we add those 883,000 “lost” Trump votes across the states (plus D.C.) in proportion to each state’s vote eligible population (and assume all of them were originally non-voters and, therefore, we do not subtract them from Biden’s state vote totals), Trump would win two states he had originally lost: Arizona and Georgia — which, together, are worth 27 electoral votes.

[The spreadsheet where I conducted this analysis is available on GitHub.]
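The allocation step described above can be sketched in a few lines of Python. This is a minimal illustration, not the GitHub spreadsheet itself: the state voting-eligible population (VEP) figures below are rough placeholders, while the Biden vote margins are the certified 2020 results.

```python
# Sketch of the proportional allocation described above: spread a national
# pool of "lost" Trump votes across states in proportion to each state's
# voting-eligible population (VEP), then see which Biden margins close.
# The VEP figures are rough ILLUSTRATIVE placeholders; the Biden margins
# are the certified 2020 results. The essay's actual numbers are on GitHub.

lost_votes = 883_000
total_vep = 239_000_000  # national voting-eligible population (from the essay)

# state -> (approximate VEP, certified Biden margin in votes)
states = {
    "AZ": (5_200_000, 10_457),
    "GA": (7_600_000, 11_779),
    "PA": (9_900_000, 80_555),
}

for state, (vep, biden_margin) in states.items():
    added = lost_votes * vep / total_vep  # state's proportional share
    # Per the essay's assumption, these voters were otherwise non-voters,
    # so they add to Trump's total without subtracting from Biden's.
    verdict = "flips" if added > biden_margin else "holds"
    print(f"{state}: +{added:,.0f} Trump votes vs. {biden_margin:,} margin -> {verdict}")
```

Run with these illustrative figures, Arizona and Georgia flip while Pennsylvania does not, matching the essay’s first analysis.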

Trump would have still lost the election by 20 electoral votes had the Republican neocons voted as loyally for Trump as other “strong” Republicans.

For Trump to win, he would still need to close a 45,000 vote gap in Pennsylvania and a 4,600 vote gap in Wisconsin.

But what if there are other Republican neocon voters we did not consider in the previous analysis? What if someone is ‘moderately’ willing to use force to solve international problems? Could they not also be considered a neocon-aligned voter?

The following statistical analysis attempts to answer that question…

A Vote Model of the 2020 Presidential Election

Using the post-election respondents from the ANES 2020 (a weighted total of 7,453 eligible voters), I estimated a multinomial logistic model for the 2020 presidential vote using a 3-level vote indicator (Trump, Biden, Non-voter/Third party voter) as the dependent variable.

The statistically significant independent variables in the model included:

  • Index of Partisan Policy Preferences (see Appendix C for a list of standardized survey items used to construct the index)
  • Demographics: Household Income, Age, Sex, Education, Race/Ethnicity
  • Party Identification and Strength of Party Identification
  • Ideological Self-Placement (Liberal — Moderate — Conservative)
  • Approval/Disapproval of President Trump’s Handling of COVID-19
  • Perception of Changes in Income Gap
  • Perceived Likelihood of Russian Interference in 2020 Election
  • Perception that Government is Run by a Few Big Interests
  • Willingness to Use Force to Solve International Problems

For the purposes of this essay, I will concentrate on the findings related to a respondent’s willingness to use force to solve international problems — which I consider an indicator of a respondent’s openness to neocon policy ideas.

[The full model estimates and diagnostics are in Appendix D and the dataset used to generate these results are available on GitHub.]

Results

Overall, a respondent’s willingness to use force was a significant predictor of a Trump vote in 2020 (Chi-square = 9.5, p < .002). More specifically, the more willing a respondent said they were to use force to solve international problems, the less likely they were to vote for Trump (see Appendix B for the variable’s coding scheme).

After controlling for a number of important factors commonly associated with a person’s vote choice (e.g., age, education, income, party ID, policy preferences), attitudes on the use of force were still significant at the margins. But was the effect large enough to alter the 2020 election outcome?

To estimate the electoral impact of attitudes on Use of Force, I first identified those respondents where the model predicted a Trump voter when, in fact, the respondent voted for Biden. Next, I further filtered down to those respondents who answered “extremely,” “very,” or “moderately” willing to use force to solve international problems — in other words, neocons who we would have expected to vote for Trump, but voted for Biden instead.

In the ANES 2020, they are represented by a scant 45 respondents (or 0.6 percent of the total sample). But that translates to 1.43 million voters in the 2020 election.

Similar to the first analysis, I added those 1.43 million “lost” Trump votes across the states (plus D.C.) in proportion to each state’s vote eligible population (but this time also subtracting them from Biden’s state vote totals). After doing this, the adjusted state-level vote totals end up flipping not just Arizona and Georgia (as in the first analysis), but also Pennsylvania and Wisconsin. Those 57 electoral votes would have given Trump 289 electoral votes to Biden’s 249.

If Trump had kept the Republican neocons in his camp, he would have won the 2020 election.

[The details behind this second analysis can also be found on GitHub.]

Final Thoughts

Obviously, this is all very speculative. For example, there are other attitudinal variables (expectations of Russian interference, perceived changes in income inequality, Trump’s handling of COVID-19) that were significant predictors of the 2020 vote choice and could have impacted the election outcome.

I just as easily could have shown evidence that disapproval of Trump’s handling of the COVID-19 pandemic impacted enough votes to give Biden the victory.

Or expectations of Russian election interference. Or the growing income gap. Or the belief that Washington, D.C. is controlled by a few big money interests.

All were significant factors in the 2020 presidential election, in isolation or considered together.

If anything, this analysis reminds us that the 2020 election was closer than we may remember. Four states. That was the difference.

Trump still could have won, even with an unprecedented level of media hostility and a worldwide pandemic erupting on his watch.

And, so, it does not seem laughable or implausible that the 2020 outcome was changed by a relatively large, motivated group of neocon Republicans — an identifiable faction openly upset about a number of Trump foreign policy decisions (or threatened decisions).

I believe it was.

  • K.R.K.

Send comments to: nuqum@protonmail.com

APPENDIX A: The American National Election Study (2020)

The American National Election Study 2020

Data collection for the ANES 2020 Time Series Study pre-election interviews began in August 2020 and continued until Election Day, Tuesday (November 3). Post-election interviews began soon after the election and continued through the end of December. This field period began earlier than the traditional ANES field period, which typically starts the day after Labor Day and concludes the day before Election Day.

The ANES 2020 Time Series Study features a fresh cross-sectional sample, with respondents randomly assigned to one of three sequential mode groups: web only, mixed web (i.e., web and phone), and mixed video (i.e., video, web, and phone). The study also features re-interviews with 2016 ANES respondents (conducted by web), and post-election surveys with respondents from the General Social Survey (GSS).

Figure A.1: Summary of ANES 2020 Survey Methodology

Source: ANES 2020 Time Series Study Preliminary Release: March 24, 2021 version. www.electionstudies.org

APPENDIX B: Coding of Key Questions from 2020 ANES

APPENDIX C: Survey Items Best for Defining Respondent Policy Preferences (Principal Component Analysis)

APPENDIX D: Detailed Output for Multinomial Logistic Model for the Presidential Vote in 2020

Freedom of speech is messy, which is why defending it is so important

By Kent R. Kroeger (Source: NuQum.com, May 15, 2021)

A family member attached this May 13th New York Times article — “Activists and Ex-Spy Said to Have Plotted to Discredit Trump ‘Enemies’ in Government” — to a spirited email that started: Do you still defend Project Veritas?!

It took me the 15 minutes required to read the article to do just that — defend Project Veritas.

First off, however, I don’t think I’ve ever “defended” Project Veritas in the past. To the contrary, I do not care for the Mike Wallace/60 Minutes-pioneered form of ‘hidden camera’ journalism that Project Veritas has heavily relied upon in its news-gathering activities. While it makes for good television and internet click bait, the technique is easily abused, especially when it captures private comments out of context. And Project Veritas’ use of ‘honey pots’ to entrap their targets is downright unethical.

All the same, I cannot recall a single instance where Project Veritas and its founder, James O’Keefe, have ever had to retract a news story they’ve published. The New York Times only wishes it could say the same.

I must also confess I find it exhilarating when powerful people (particularly in the news media) are forced to reconcile their private statements with their public facade of journalistic objectivity. Project Veritas’ exposure of CNN as the propaganda arm of the Democratic Party is priceless — and entirely accurate.

Still, I take seriously the question as to whether Project Veritas’ professionalization of its ‘gotcha’ news-gathering approach is socially constructive — especially when the organization employs intelligence experts (‘spies’) and their sophisticated spycraft.

We should start that answer with a brief summary of the recent New York Times story (via The Hill):

“A conservative activist group, helped by a former British spy, secretly surveilled government employees during the Trump administration with the goal of discrediting perceived enemies of former President Trump…Project Veritas — with aid from a former British spy and Erik Prince, the founder of Blackwater — was part of a campaign that involved surveillance operations against members of the FBI.

The overall effort, the Times wrote, also included a plan for a sting operation against Trump’s former national security adviser H.R. McMaster that involved some Veritas staffers, though Veritas itself has denied any involvement with that plot. Both, the Times alleged, were intended to reveal anti-Trump sentiments.”

If echoes of the Nixon administration using government resources to spy on its enemies come to mind, you are not alone. There is nothing more frightening to people in my age cohort than a sitting U.S. president using his or her immense powers to investigate and discredit enemies.

But the comparisons of Project Veritas’ activities to Nixon’s diverge quickly with the details offered by the Times story. According to that story, there is no evidence Trump’s White House authorized or coordinated Project Veritas’ efforts to investigate the ‘loyalty’ of key members of Trump’s foreign policy circle.

The Hatch Act makes it illegal for public officials, such as White House staffers, to use their time or government resources to pursue their own private interests or someone else’s (such as a U.S. president’s). That prohibition includes explicitly partisan political activities.

[Admittedly, Donald Trump pushed the envelope on those restrictions on at least one occasion. And, in my opinion, Trump’s personal actions with respect to investigating Hunter Biden’s questionable activities in Ukraine constituted a serious breach of ethical, if not legal, behavior for a president. However, given that the U.S. news media rarely engages in credible, non-partisan investigative journalism anymore, it is hard to judge Trump too harshly.]

And while a U.S. Senate staffer (Barbara Ledeen) was implicated in the Times story about Project Veritas’ alleged activities to “expose the deep state’s disloyalty” to the Trump administration, it is not clear that she had engaged in any activities outside legal bounds. In fact, congressional committees possess exceptional latitude through their oversight powers to investigate executive actions and personnel.

Think about the Times story from this perspective: Is it newsworthy if a senior member of a presidential administration is actively working behind the scenes against the president’s policies?

Of course it is. To suggest otherwise is intellectually dishonest.

In the final analysis, watching the powerful eating their own is the least of my worries, except when such activities potentially compromise the privacy or liberties of all Americans. Targeting the powerful with sophisticated intelligence gathering tools is one thing, but should those capabilities be turned against average citizens, that is worrisome.

In that sense, I am not a fan of Project Veritas’ increasingly sophisticated and deceptive news-gathering methods, even as I will passionately defend their right to do what they do — provided they don’t violate criminal law in their pursuit of such information.

Our Constitution’s First Amendment extends to all citizens — not just journalists — broad and inviolable rights to investigate, report on and judge the activities of the political class. To impose unnecessary limits on those rights is a direct threat to all of our freedoms.

  • K.R.K.

Send comments to: nuqum@protonmail.com

Beadle (the Data Crunching Robot) Predicts the NFL Playoffs

By Kent R. Kroeger (Source: NuQum.com; January 15, 2021)

Beadle (the Data Crunching Robot); Photo by Hello Robotics (Used under the Creative Commons Attribution-Share Alike 4.0 International license)

Since we are a mere 24 hours away from the start of the NFL Divisional Round playoffs, I will dispense with any long-winded explanation of how my data loving robot (Beadle) came up with her predictions for those games.

Suffice it to say, despite her Bayesian roots, Beadle is a rather lazy statistician who typically eschews the rigors and challenges of building statistical models from scratch for the convenience of cribbing off the work of others.

Why do all that work when you can have others do it for you?

There is no better arena in which to reward Beadle’s sluggishness than predicting NFL football games, as there are literally hundreds of statisticians, data modelers and highly motivated gamblers who publicly share their methodologies and resultant game predictions for all to see.

Why reinvent the wheel?

With this frame-of-mind, Beadle has all season long been scanning the Web for these game predictions and quietly noting those data analysts with the best prediction track records. Oh, heck, who am I kidding? Beadle stopped doing that about four weeks into the season.

What was the point? It was obvious from the beginning that all, not most, but ALL of these prediction models use mostly the same variables and statistical modeling techniques and, voilà, come up with mostly the same predictions.

FiveThirtyEight’s prediction model predicted back in September that the Kansas City Chiefs would win this year’s Super Bowl over the New Orleans Saints. And so did about 538 other prediction models.

Why? Because they are all using the same data inputs and whatever variation in methods they employ to crunch that data (e.g., Bayesians versus Frequentists) is not different enough to substantively change model predictions.

But what if the Chiefs are that good? Shouldn’t the models reflect that reality?

And it can never be forgotten that these NFL prediction models face a highly dynamic environment where quarterbacks and other key players can get injured over the course of a season, fundamentally changing a team’s prospects — a fact FiveThirtyEight’s model accounts for with respect to QBs — and the reason preseason model predictions (and Vegas betting lines) need to be updated from week to week.

Beadle and I are not negative towards statistical prediction models. To the contrary, given the infinitely complex contexts in which they are asked to make judgments, we couldn’t be more in awe of the fact that many of them are very predictive.

Before I share Beadle’s predictions for the NFL Divisional Round, I should extend thanks to these eight analytic websites that shared their data and methodologies: teamrankings.com, ESPN’s Football Power Index, sagarin.com, masseyratings.com, thepowerrank.com, ff-winners.com, powerrankingsguru.com, and simmonsratings.com.

It is from these prediction models that Beadle aggregated NFL team scores to generate her own game predictions.

Beadle’s Predictions for the NFL Divisional Playoffs

Without any further ado, here is how Beadle ranks the remaining NFL playoff teams on her Average Power Index (API), which is simply each team’s standardized score (z-score) after averaging the index scores from the eight prediction models:

Analysis by Kent R. Kroeger (NuQum.com)
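
To make that API computation concrete, here is a minimal Python sketch of averaging each team’s scores across models and then standardizing across teams. The team names and index scores below are made-up placeholders, not the actual outputs of the eight models:

```python
import statistics

# Made-up index scores for four teams from three hypothetical
# prediction models (placeholders, not the real model outputs).
model_scores = {
    "Chiefs":  [29.0, 31.2, 30.1],
    "Saints":  [27.5, 28.9, 28.1],
    "Packers": [26.8, 27.4, 27.0],
    "Bills":   [25.9, 26.3, 26.5],
}

def average_power_index(scores):
    """Average each team's scores across models, then standardize
    (z-score) the averages across teams."""
    means = {team: sum(v) / len(v) for team, v in scores.items()}
    mu = statistics.mean(means.values())
    sd = statistics.stdev(means.values())
    return {team: (m - mu) / sd for team, m in means.items()}

api = average_power_index(model_scores)
for team, z in sorted(api.items(), key=lambda kv: -kv[1]):
    print(f"{team}: {z:+.2f}")
```

In practice the eight models use different scales, so one might standardize each model’s scores before averaging; the sketch follows the simpler average-then-standardize description above.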

And from those API values, Beadle makes the following game predictions (including point spreads and scores) through the Super Bowl:

No surprise: Beadle predicts the Kansas City Chiefs will win the Super Bowl in a close game with the New Orleans Saints.

But you didn’t need Beadle to tell you that. FiveThirtyEight.com made essentially the same prediction five months ago.

  • K.R.K.

Send comments to: nuqum@protonmail.com

The data do not support the Miami Dolphins bailing on Tua Tagovailoa

[Headline photo: Two cheerleaders for the Miami Dolphins football team (Photo by Jonathan Skaines; Used under the CCA-Share Alike 2.0 Generic license.)]

First, an apology to my wife. The above photo was one of the few Miami Dolphins-related photos with a permissive license I could find on short notice. It should not be regarded, however, as an endorsement of fake smiles.

Now, to the issue at hand…

Alabama’s Tua Tagovailoa was the fifth overall pick and second quarterback taken in the 2020 National Football League (NFL) draft.

Drafted by the Miami Dolphins, Tagovailoa went behind Heisman winner Joe Burrow (QB — Cincinnati Bengals) and Ohio State’s Chase Young (DE — Washington Sea Dogs) and was one of four quarterbacks selected in the first round. Los Angeles took the third quarterback, Oregon’s Justin Herbert, as the sixth overall pick, and Green Bay — mysteriously — thought Utah State’s Jordan Love, the 26th overall pick and fourth quarterback taken, was the final piece needed for the Aaron Rodgers-led Packers to win another Super Bowl (…and, even more mysteriously, Love’s clipboard-holding skills seem to be what the Cheeseheads needed this season).

Normally when an NFL team drafts a quarterback as high as fifth, it gives him at least a few years to earn his first-round contract. The Tampa Bay Buccaneers gave first overall pick Jameis Winston five years, as did the Tennessee Titans with second overall pick Marcus Mariota. Sam Bradford and Mark Sanchez were given four years to prove their value to their respective teams, the St. Louis Rams and New York Jets. The oft-injured Robert Griffin III — the Washington Federals’ second pick in the 2012 draft — had three years. Even purple drank chugging rumors didn’t stop JaMarcus Russell from getting two solid years of opportunity from the Oakland Raiders.

And, keep in mind, the Dolphins’ 2012 first round pick — and current Titans quarterback — Ryan Tannehill gave the team six mediocre seasons before they jettisoned him in 2019. The Dolphins were patient with Tannehill — who has turned into a high-quality quarterback — so why not with Tagovailoa?

Though impatient with his head coaches (he has had six since buying the team in 2008), Dolphins owner Stephen M. Ross, who famously said “there’s a lot of good and I believe there’s a lot of bad” regarding his friend President Donald Trump, has a low-profile personality and is not known for creating drama.

Yet, if he allows his football team’s brain trust to draft another quarterback in the first round, he will get more than drama; he will completely undercut the already fragile confidence of his current starter, Tagovailoa.

So why are a significant number of NFL draft experts seriously recommending the Dolphins use their third pick in the 2021 draft on another quarterback? Writing for ESPN, three out of seven experts said the Dolphins should use their pick on another quarterback:

Jeremy Fowler, national NFL writer: Quarterback. Key word is “address.” Miami needs to thoroughly evaluate the top quarterbacks in the draft, then weigh the pros and cons of not taking one and sticking with Tagovailoa as the unquestioned starter. Miami owes it to its fans and organization to at least do that. This is the one position where a surplus isn’t a bad thing. Keep drafting passers high if necessary. Tua might be the guy regardless. And if the Dolphins decide he’s better than Zach Wilson or Justin Fields or Trey Lance, then grab the offensive tackle or playmaking receiver Miami needs around him.

Mike Clay, fantasy football writer: Quarterback. You don’t have to agree with me on this, but I’ve always been in the camp of “If you’re not sure you have a franchise quarterback, you don’t have a franchise quarterback.” From my perspective, we don’t know whether Tua Tagovailoa is the answer, as he didn’t look the part and was benched multiple times as a rookie. Miami’s future looks bright after a 10-win season in Brian Flores’ second campaign, so it’s unlikely this franchise will be picking in the top five again anytime soon. If they aren’t convinced Tua is the franchise quarterback, they need to avoid sunk-cost fallacy and a trip to long-term quarterback purgatory.

Seth Walder: Quarterback. Tagovailoa still might pan out, but quarterback is too important for Miami to put all of its eggs in that basket, especially after he finished 26th in QBR and clearly did not earn complete trust from the coaching staff. Take a shot at whichever of the top three quarterbacks is left on the board while keeping Tagovailoa, at least for now. That way, Miami can maximize its chances of finding its franchise QB.

And the question must be asked, why? Has Tagovailoa grossly under-performed? If Miami drafts another quarterback just a year after getting Tagovailoa, the only conclusion one can make is that the Dolphins consider him a bust, but with only a year under his belt is that even possible to know?

Before assessing Tagovailoa’s performance in his rookie season, we should consider the possible comparisons. The first comparison is the most obvious: compare Tagovailoa to other quarterbacks’ first significant playing year (which I define as a quarterback’s first year with at least three starts and 50 or more pass attempts — admittedly, a low threshold).
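
That filter — a quarterback’s first season with at least three starts and 50 or more pass attempts — is easy to express in code. A minimal sketch, using made-up season rows rather than the actual pro-football-reference data:

```python
# Each row: (player, year, games_started, pass_attempts, qbr).
# The players and numbers are invented for illustration only.
seasons = [
    ("QB A", 2018, 2, 48, 38.0),    # fails both thresholds
    ("QB A", 2019, 12, 410, 55.2),  # first significant year for QB A
    ("QB A", 2020, 16, 520, 61.0),
    ("QB B", 2020, 9, 290, 52.9),   # first significant year for QB B
]

def first_significant_year(rows):
    """Return each QB's first season with >= 3 starts and >= 50
    pass attempts, mapped to (year, qbr)."""
    out = {}
    for player, year, starts, attempts, qbr in sorted(rows, key=lambda r: r[1]):
        if player not in out and starts >= 3 and attempts >= 50:
            out[player] = (year, qbr)
    return out

print(first_significant_year(seasons))
```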

Also, for comparability’s sake, I’ve decided here to only compare quarterbacks drafted in the first round since 2005, the year in which www.pro-football-reference.com starts computing ESPN’s Total QBR Index (QBR) for quarterbacks. While other quarterback metrics have been posited as better measures of quarterback quality — passer rating, adjusted net yards per pass attempt — none are perfect, as they don’t directly account for the style of a team’s offense, the quality of a team’s personnel, and the quality of the defense, all of which play a significant role in how a quarterback plays. In the end, I went with the statistic that best predicts wins: ESPN’s QBR.

[I should add that while the QBR does not consider the strength-of-schedule (SoS) faced by a quarterback, it is easily computed and nicely demonstrated in a past analysis by Chase Stuart on footballperspective.com. In a follow-up to this essay, I will incorporate SoS information into player performance metrics for the 2020 season.]

The second comparison is Tagovailoa’s performance from game to game. Did he improve? And the final comparison is against the QBR scale itself. By design, ESPN’s QBR offers an approximate objective standard by which to judge quarterbacks: QBRs exceeding 50 represent above-average quarterbacks when compared to all quarterbacks since 2006.

I will dispense with the last comparison first: Tagovailoa’s rookie year QBR, based on nine starts, 290 pass attempts, a 64.1 percent completion rate and 11 touchdown passes against five interceptions is an above-average 52.9 (which puts him at 26th out of 35 quarterbacks for whom the QBR was computed).

Well, on this comparison at least, Tagovailoa does not stand out in a positive way. But perhaps his performance improved over the season? Hard to say. His first start in Week 8 against the Los Angeles Rams — the NFL’s best passing defense — led to a 29.3 QBR, and over his next eight starts he achieved QBRs over 60 against the Arizona Cardinals (Week 9, QBR 87.3), the Los Angeles Chargers (Week 10, QBR 66.5), the Cincinnati Bengals (Week 13, QBR 74.5) and the Las Vegas Raiders (Week 16, QBR 64.4). Conversely, he struggled against the Denver Broncos (Week 11, QBR 22.9), the Kansas City Chiefs (Week 14, QBR 30.2), and the Buffalo Bills (Week 17, QBR 23.3) — all good passing defenses.

After these first two comparisons, it is hard to decide if Tagovailoa is going to be Miami’s franchise quarterback for the future. As with almost any rookie quarterback, there are positives and negatives, and neither overwhelms the other in Tagovailoa’s case.

However, in our final comparison, I believe Tagovailoa has more than proven it is far too soon for the Dolphins to spend a Top 3 draft choice on another quarterback.

First, we should look at the season-to-season QBRs of quarterbacks who are arguably “franchise” quarterbacks and who were picked in the first round (see Figure 1 below). And if you don’t consider Kyler Murray, Ryan Tannehill, Baker Mayfield or Jared Goff franchise quarterbacks, check in with me in a couple of years. All four are currently in a good, mid-career trajectory by historical standards.

Figure 1: Season-to-Season QBRs for NFL “franchise” Quarterbacks Selected in the 1st Round since 2005

Data Source: www.pro-football-reference.com

Three things jump out to me from Figure 1: (1) Franchise quarterbacks rarely have seasons with dismal overall QBRs (<40), (2) Aaron Rodgers really is that great, and (3) Patrick Mahomes, still early in his career, is already in the QBR stratosphere (…and he almost has nowhere to go but down).

How does Tagovailoa compare to my selection of franchise quarterbacks and non-franchise quarterbacks, as well as the other quarterbacks in the 2020 first round draft class (Joe Burrow and Justin Herbert)? As it turns out, pretty well (see Figure 2).

As for the non-franchise quarterbacks, my most controversial assignments are Cam Newton and Joe Flacco. I’m open to counter-arguments, but their inclusion in either group does not change the basic conclusion from Figure 2 with respect to Tagovailoa.

Figure 2: Season-to-Season QBRs for NFL “franchise” & “non-franchise” Quarterbacks Selected in the 1st Round since 2005

Data Source: www.pro-football-reference.com

In comparison to the other quarterbacks and their first substantive year in the NFL, Tagovailoa’s 2020 QBR is slightly below the average for franchise quarterbacks (52.9 versus 54.6, respectively), and is significantly higher than for non-franchise quarterbacks (52.9 versus 46.1, respectively).

Among his 2020 draft peers, Tagovailoa’s QBR is comparable to Burrow’s (who missed six games due to a season-ending injury), but a far cry from Herbert’s (QBR = 69.7), who is already showing clear signs of superstardom ahead.

Experts are happy to debate whether Tagovailoa has the ability to “throw guys open,” or whether the level of receiver talent he had at Alabama masked his deficiencies. He may well never be a franchise quarterback by any common understanding of the category.

But given his performance in his rookie campaign and how it compares to other quarterbacks, it is unfathomable to me that the Dolphins could entertain even the slightest thought of drafting a quarterback in the 2021 draft. I hope they are not and it is merely some ESPN talking heads with that wild hair up their asses.

  • K.R.K.

Send comments to: nuqum@protonmail.com

Why opinion journalists are sometimes bad at their job (including myself)

[Headline graphic by Dan Murrell; Data source: RottenTomatoes.com]

By Kent R. Kroeger (Source: NuQum.com; January 4, 2021)

Opinion journalists, such as movie critics, bring biases to every opinion they hold, and complete objectivity is an ideal few, if any, attain.

The scientific literature on this trait common to all humans, not just opinion journalists, is vast and well-established. The lenses through which we interact with the world are multilayered and varied, each of us with our own unique configuration.

The science tells us we tend to overestimate our own knowledge while underestimating the knowledge of others (“Lake Wobegon effect“); we tend to believe an idea that has been repeated to us multiple times or is easy to understand, regardless of its actual veracity (“illusory truth effect“); we overestimate the importance of recent information over historic information (“recency effect“); we offer others the opinions they will view most favorably and often suppress our unpopular opinions (“social desirability bias“); and perhaps the most dangerous bias of all: confirmation bias — our inclination to search for, process and remember information that confirms our preconceptions to the exclusion of information that might challenge them.

But nowhere are human biases more socially destructive than when opinion journalists project onto others the motivations for their personal opinions and actions. It is often called the illusion of transparency and it occurs when we overestimate our own ability to understand what drives someone else’s opinions and behaviors. [The other side of that same bias occurs when we overestimate the ability of others to know our own motivations.]

The illusion of transparency often leads to fundamental attribution errors in which the explanations for the opinions and behaviors of others are falsely reduced to psychological and personality-based factors (“racist,” “sexist,” “lazy,” “stupid,” etc.).

In combination with intergroup bias — which takes the illusion of transparency to the group level and causes members of a group to give preferential treatment to their own group, often leading to a group’s intellectual atrophy as they make it difficult for new ideas to be introduced into the group — this tendency to falsely infer the motives of others can create systematic, group-level misunderstandings, leading potentially to violent social conflicts.

Judge not, that ye be not judged (Matthew 7:1-3 KJV)

I know something of these biases, as I engage in them when I write, including in my last opinion essay about the unusual proportion of male movie critics that gave Wonder Woman 1984 (WW84) a positive review (“Are movie critics journalists?“). Though I have never met any of these male movie critics, I still felt comfortable attributing their positive reviews of WW84 to their being handpicked by WW84’s movie studio (Warner Bros.) for early access to the movie, along with their desire to “please their editors and audience” (a presumed manifestation of the social desirability bias) and other career motives.

Was I right? I offered little evidence beyond mere conjecture as to why the few early negative reviews for WW84 came almost entirely from female movie critics (I basically said liberal men are “useless cowards“). For that I am regretful. I can do better.

Yet, I still believe there was a clear bias among some movie critics in favor of WW84 for reasons unrelated to the actual quality of the movie. How is it possible that, out of the 19 male movie critics in Rotten Tomatoes’ “Top Critics” list who reviewed WW84 in the first two days of its Dec. 15th pre-release, not one gave WW84 a bad review? Not one.

If we assume the reviews were independent of one another and that the actual quality of WW84 warranted 80 percent positive reviews (an assumption purely for argument’s sake), then the probability that we’d get 19 consecutive positive reviews from the top male movie critics is a mere 1.4 percent ( = 0.8^19). If we use WW84’s current Rotten Tomatoes score among all critics of 60 percent as our assumption, that probability goes to near zero.
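
The arithmetic behind that claim is a straightforward independence calculation: if each review is an independent draw that is positive with probability p, the chance of 19 straight positive reviews is p raised to the 19th power. A quick check in Python:

```python
def prob_all_positive(p, n=19):
    """Probability of n consecutive positive reviews, assuming each
    review is independently positive with probability p."""
    return p ** n

# Under the generous 80 percent baseline assumed above:
print(f"{prob_all_positive(0.80):.3f}")   # ~0.014, i.e. about 1.4 percent

# Under the 60 percent all-critics Rotten Tomatoes score:
print(f"{prob_all_positive(0.60):.6f}")   # effectively zero
```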

I can only draw one conclusion: Early reviews by the top male critics were excessively positive for WW84.

As to why this happened, be my guest with your own theories and hypotheses. Do I think Warner Bros. paid for good WW84 reviews? That is the typical straw man argument Hollywood journalists like to use to discredit critics of entertainment journalism. I have no evidence of money changing hands between Warner Bros. and selected movie critics and I have never suggested as much.

Do I think editors, peer pressure, and even the general public mood weigh heavily on movie critic reviews? Absolutely, yes, and scientific evidence in other social contexts suggests this is likely the case.

Which is why when I read other journalists and movie critics suggest that negative WW84 reviews are motivated by deep-rooted sexism, I cry, “Foul!”

No, critics of “Wonder Woman 1984” are not sexist

In a recent article for Forbes, movie critic and screenwriter Mark Hughes concludes that much of the criticism of WW84, especially from male critics, is motivated by nothing less than sexism. He writes:

Questions of the film’s tone and action sequences are frankly of little interest to me, since most of the same folks offering up those complaints were eager to praise the silliness of many other superhero films. One day it’s “these films take themselves too seriously,” and the next it’s “this film is silly and should take itself more seriously.” Wash, rinse, repeat as necessary (or as clicks and payday necessitate).

Likewise, when men helm films we see far more willingness to weigh “that which works” as more important than “that which doesn’t work,” and allow them room to come back later and impress us. A woman, though? Not so much, as Patty Jenkins has been personally insulted and condemned by voices declaring Wonder Woman 1984 an inexcusable offense to humanity. If you think I’m being hyperbolic about the accusations hurled against the film and its defenders, go look around social media and press coverage for 30 seconds, and then come back to finish this article…

In other words, according to Hughes, we don’t have to be conscious of our deeply ingrained, latent sexism to be subject to its power. Merely disliking a movie directed by a woman proves its existence.

Let me start by noting that many of the male (and female) movie critics who did not like WW84 gave glowing reviews to director Patty Jenkins’ first Wonder Woman movie in 2017. Chris Stuckmann is as good an example as any of the flaw in Hughes’ sexism charge: Stuckmann’s 2017 Wonder Woman review. His WW84 review.

Did Stuckmann’s latent sexism only kick in after 2017? Of course not. The more likely explanation is that Stuckmann realizes Wonder Woman (2017) is a very good movie and WW84 is not.

But since Hughes is carelessly willing to suggest critics like Stuckmann are driven by subconscious sexist tendencies when they review movies by female directors, let me conjecture that Hughes had a much more powerful motivation for giving WW84 a good review.

Hughes is a screenwriter (as well as being a movie critic) and one of the well-known attributes of Hollywood culture is that directors, writers, and actors do not publicly like to piss on someone else’s work. It can be career suicide, particularly when that person directed one of the best movies of 2017 (Wonder Woman) and is widely admired within the industry. Even if sexism is alive and well in Hollywood (and I have no doubt that it is), by virtue alone of having helmed two great movies in her young career — Monster (2003) and Wonder Woman (2017) — Jenkins possesses real power by any Hollywood standard.

That Hughes liked WW84 is not surprising. I would have been stunned if Hughes hadn’t.

My complaint about Hughes’ recent Forbes article chastising the “harsher” critics of WW84 is not that Hughes thought WW84 was a good film. That Hughes appreciated the positive themes in WW84 enough to overlook the movie’s obvious flaws is truly OK. [My family, myself notwithstanding, loved the movie.] I’ve loved many movies that, objectively, were rather bad (Nicolas Cage in The Wicker Man comes to mind).

My problem with Hughes (and, unfortunately, far too many writers and journalists at present) is that he throws around psychological theories and personal accusations without a shred of empirical evidence.

Hughes doesn’t know the motivations for why someone writes a critical review any more than I do.

But Hughes takes it one step farther. He implies there’s a dark, antisocial aspect to someone who doesn’t like WW84. He asks: “Do you look at the world around you and decide we need LESS storytelling that appeals to our idealism and posits a world in which grace and mercy are transformative, in which people can look at the truth and make a choice in that moment to try to be better?”

No, Mr. Hughes, I do not think we need LESS storytelling that appeals to our idealism and better angels. But I believe we need MORE GOOD storytelling that does that. Unfortunately, in my opinion, WW84 does not meet that standard. Furthermore, when Hollywood and our entertainment industry do it poorly, I fear it risks generating higher levels of cynicism towards the very ideals you (and I) endorse.

As one of my government bosses once said as he scolded me, “Kent, good intentions don’t matter. I want results.”

I think that dictum applies to Hollywood movies too.

  • K.R.K.

Send comments to: nuqum@protonmail.com

 

Are movie critics journalists? Reviews for “Wonder Woman 1984” suggest many are not.

[Headline photo: Gal Gadot speaking at the 2016 San Diego Comic Con International, in San Diego, California (Photo by Gage Skidmore; used under CCA-Share Alike 2.0 Generic license.)]

By Kent R. Kroeger (Source: NuQum.com; December 31, 2020)

A friend of mine from graduate school — whose opinions I trusted, particularly when it came to movies and popular culture (for example, he introduced me to South Park) — shocked me one day when he told me he hated The Godfather.

How can someone who loves movies hate The Godfather?! How could someone so well-informed — he is today a recognized expert in the role and social importance of myth-making — be so utterly wrong?

The answer is quite simple: Danny prided himself on being a critic and he had a genuine problem with The Godfather, particularly the acting and dramatic pacing. [A similarly harsh critique of The Godfather was written in 1972 by Stanley Kauffmann of The New Republic.]

The reality is, thoughtful people can have dramatic differences in opinion, especially when it comes to things as subjective as movies and entertainment. [I love Monty Python and my Stanford PhD wife thinks they are moronic. Both opinions can be correct.]

Still, I’m convinced if you put 100 well-educated movie critics in a room to discuss The Godfather, 95 of them would say the movie is an American classic, and most would probably put one or both of the first two Godfather movies in the Top 20 of all time. The ‘wisdom of the crowd’ represents something real and cannot be ignored.

At the same time, those five Godfather-dismissing critics are no less real and their opinions are no less meritorious — assuming they aren’t pursuing an agenda unrelated to judging the quality of The Godfather.

But that is the problem I fear contaminates too many movie reviews today. Movie critics, by training and platform, are ‘opinion journalists.’ As such, they filter their opinions through a desire to please (and impress) an immediate social circle (and bosses), as well as through the mood of the times. We all do it; it is only human.

But good journalists, including movie critics, fight that tendency — or, at least, I believe they should make the attempt.

In the case of movie criticism, to not do so risks compromising the value of the critiques. At best, it renders the criticism worthless, and at worst, malevolent.

It is fair to ask at this point, what the hell am I talking about?

I am not going to review Wonder Woman 1984 (WW84) here. I enjoy reading movie criticisms, but I don’t enjoy writing them. However, if I did review WW84, it might sound something like this review by Alteori:

As much as I thought Gal Gadot raised her game in WW84, I didn’t think anybody else did. But my overall reaction to WW84 was driven, in part, by what I did before I even saw the movie.

My first mistake (besides grudgingly subscribing to HBOMax — whose horrible, wretched parent company I once worked at for a short time) was to read one of the embargo-period reviews. Those are reviews from movie critics pre-selected by Warner Bros. to see the film prior to a wider release.

Normally, for movies I am excited to see, I avoid the corporate hype and eschew the early reviews. I want my opinion to be uncorrupted by other opinions. WW84 was one of those movies because, as this blog can attest, I am a huge fan of the first Wonder Woman movie (2017), particularly Gal Gadot’s portrayal of the superhero and the way director Patty Jenkins and screenwriter Allan Heinberg avoided turning the film into a platform for some watered-down, partisan political agenda. Wonder Woman (2017) was a film made for everyone.

[By the way, as a complete digression, I don’t care when people mispronounce names, especially names outside someone’s native language. But I don’t understand why 9-out-of-10 movie reviewers still pronounce Gal Gadot’s name wrong. It couldn’t be simpler. It is Gal (as in guys and gals) and Guh-dote (as in, ‘my grandmother dotes on me’). Here is Gal to help you with the pronunciation.]

But, for reasons unknown, I decided to read one “Top Critic” review of WW84 before seeing the film myself. I will not reveal the reviewer’s name; yet, after seeing WW84, I have no idea what movie that person saw because it wasn’t the WW84 I saw.

This is the gist of that early review (for which I paraphrase in order to protect the identity and reputation of that clearly conflicted reviewer):

Wonder Woman 1984 is the movie we’ve all been waiting for!

If I had only read the review more closely, I would have seen the red flags. Words and phrases like “largely empty spectacle,” “narratively unwieldy,” “overwrought,” “overdrawn,” and “self-indulgent” were sprinkled throughout; I simply wasn’t open to those hints.

In fact, after reading nearly one hundred WW84 reviews in the last two weeks, I see now that movie critics will often leave a series of breadcrumb clues indicating what they really thought of the movie. At the office they may be shills for the powerful movie industry, but similar to Galen Erso’s design of the Death Star, they will plant the seed of destruction for even the most hyped Hollywood movie. In other words, they may sell their souls to keep their jobs, but they still know a crappy movie when they see one.

Maybe ‘crappy’ is too strong, but WW84 was not a good movie — not by any objective measure that I can imagine. Don’t take my word for it. Read just a few of the reviews on RottenTomatoes.com by movie critics who still put their professional integrity ahead of their party schedule: Hannah Woodhead, Fionnuala Halligan, Angelica Jade Bastién, and Stephanie Zacharek.

It’s not a coincidence that these movie critics are all women. It is clear to me that they have been gifted a special superpower which allows them to see through Hollywood’s faux-wokeness sh*t factory. That male movie critics are too afraid to see it, much less call it out, is further proof that one of the byproducts of the #MeToo movement is that liberal men are increasingly useless in our society. They can’t even review a goddamn movie with any credibility. Why are we keeping them around? What role do they serve?

Alright. Now I’ve gone too far. The vodka martinis are kicking in. I’m going to stop before I type something that generates the FBI’s attention.

I’ll end with this: I still love Gal Gadot and if WW84 had more of her and less of everyone else in the movie, I would have enjoyed the movie more. Hell, if they filmed Gal Gadot eating a Cobb salad for two-and-a-half-hours I would have given the movie two stars out of four.

To conclude, if you get one thing from this essay, it is this: Gal Guh-dote. Gal Guh-dote. Gal Guh-dote. Gal Guh-dote. Gal Guh-dote. Gal Guh-dote. Gal Guh-dote. Gal Guh-dote. Gal Guh-dote…

  • K.R.K.

Send comments to: nuqum@protonmail.com

Why the Season 2 finale of ‘The Mandalorian’ matters to so many of us

[Headline graphic: The Mandalorian (Graphic by Gambo7; used under the CCA-Share Alike 4.0 Int’l license)]

By Kent R. Kroeger (Source: NuQum.com; December 27, 2020)

________________________________

“As in the case of many great films, maybe all of them, we don’t keep going back for the plot.” – Martin Scorsese

“I don’t care about the subject matter; I don’t care about the acting; but I do care about the pieces of film and the photography and the soundtrack and all of the technical ingredients that made the audience scream. I feel it’s tremendously satisfying for us to be able to use the cinematic art to achieve something of a mass emotion.” – Alfred Hitchcock

________________________________

After 55-plus years, I can count on two hands and a couple of toes the number of times I’ve cried watching a movie or TV program.

I cried when Mary Tyler Moore turned off the lights at WJM-TV.

I cried when Radar O’Reilly announced Colonel Henry Blake’s death.

I cried when the U.S. Olympic hockey team beat the Soviets in 1980.

I cried when Howard Cosell told us that John Lennon had been killed.

I cried when ET said goodbye to Elliot.

I cried when the Berlin Wall came down in 1989.

I cried when baby Jessica was pulled from a 22-foot well.

I cried when Mandy Moore’s character dies at the end of “A Walk to Remember.”

I cried when Harry Potter and his wife sent their son off to Hogwarts.

I cried when Barack Obama became our 44th president.

I cried when the 33 Chilean miners were rescued.

I cried when the Chicago Cubs won the 2016 World Series.

But I can’t remember crying harder than while watching this season’s final episode of Disney’s “The Mandalorian,” when Luke Skywalker rescues Grogu (more popularly known as ‘Baby Yoda’) from the Empire’s indefatigable, post-Return of the Jedi remnants.

Since its December 18th release on Disney+, YouTube has been flooded with “reaction” videos of Star Wars fans watching a CGI version of a young Luke Skywalker (Mark Hamill) remove his hood before Grogu’s caretaker, Din Djarin (a.k.a., The Mandalorian), and offer to train Grogu in the ways of The Force.

The “reaction” videos range from the highly-staged to the very charming and personal — all are illustrative of the deep affection so many people have for the original Star Wars characters, particularly Luke Skywalker.

For me, however, it is hard to detach from this emotional, collective experience the knowledge that it never would have happened if Lucasfilm (i.e., Disney), under the leadership of Kathleen Kennedy, hadn’t completely botched the Disney sequel movies, starting with “The Force Awakens,” director J. J. Abrams’ visually stunning but soulless attempt at creating a new Star Wars myth, followed by “The Last Jedi,” director Rian Johnson’s inexplicable platform for pissing on the original Star Wars mythos, and ending with “The Rise of Skywalker,” J.J. Abrams’ failed attempt to undo Johnson’s irreparable damage (along with the desecration Abrams himself laid upon the Star Wars brand with “The Force Awakens”).

Though opinions vary among Star Wars fans as to the extent Disney has alienated its core Star Wars audience, almost all agree that Disney’s most unforgivable sin was disrespecting the character of Luke Skywalker, who had been defined during  George Lucas’ original Star Wars trilogy as an incurable optimist with an unbreakable loyalty to his family and friends (Princess Leia Organa and Han Solo).

We cried at Season 2’s end of the “The Mandalorian,” not just for the beauty of the moment, but also because of the depth of Disney and Lucasfilm’s betrayal.

Actor Mark Hamill, himself, as he promoted (!) “The Last Jedi,” perfectly described the cultural vandalism perpetrated by Kennedy, Abrams and Johnson on Luke Skywalker:

“I said to Rian (Johnson), Jedis don’t give up. I mean, even if he had a problem he would maybe take a year to try and regroup, but if he made a mistake he would try and right that wrong. So, right there we had a fundamental difference, but it’s not my story anymore, it’s somebody else’s story and Rian needed me to be a certain way to make the ending effective…This is the next generation of Star Wars, so I almost had to think of Luke as another character — maybe he’s ‘Jake Skywalker.’ He’s not my Luke Skywalker.”

That is not exactly what Johnson wanted to hear from one of his “Last Jedi” actors just as the movie was being released. But Hamill’s words spoke for many longtime Star Wars fans.

In fact, many of us believe Disney and Lucasfilm’s Kennedy, with ruthless premeditation, intended to use the Disney sequel movies to malign Lucas’ Star Wars characters (with the exception of Princess Leia) in favor of the Disney-ordained Star Wars cast: Rey Palpatine, Kylo Ren (Ben Solo), Poe Dameron, and Finn.

I’m fairly confident in this prediction: Nobody 10, 20 or 30 years from now is going to care about Rey, Kylo, Poe and Finn. But I’m 99 percent sure we’ll still be talking about Luke Skywalker, if only in recalling how Disney f**ked up one of the most iconic heroes in movie history. Rey inspires no one — including young girls, who apparently were Lucasfilm’s targeted demo with the Disney sequel movies.

Had Disney trusted their own market research, they would have known the only reliable target was the tens of millions of original Star Wars fans (and their children and grandchildren), whose loyalty to Star Wars was proven when they still showed up at theaters for Disney’s three sequel movies, even after their devotion was insulted with the unnecessary diminution of the once dashing and heroic Han Solo (Harrison Ford) and, of course, Luke.

Had Disney treated their core audience with respect, Star Wars fans now might be anticipating Rey’s next cinematic adventure, instead of drowning themselves in the bittersweet giddiness of Luke’s triumphant return on “The Mandalorian.”

To be sure, a lot of Star Wars fans want to put Luke’s return in its proper perspective. We still have to accept that — under the Disney story line — Luke is destined to slump off to a remote island, drinking titty-milk from the teat of a giant alien sea cow while whining that he couldn’t stop his nephew from killing off Luke’s young Jedi pupils (including, presumably, Grogu).

Despite the joyousness of Luke on “The Mandalorian,” the dark cloud of Abrams and Johnson’s bad storytelling still looms large.

But even the biggest Disney critics are allowing themselves to enjoy what Jon Favreau and Dave Filoni — the creative team behind “The Mandalorian” — are doing for the fans.

One such person is Nerdrotic (Gary Buechler), the bearded crown prince of the amorphous Fandom Menace — a term used to describe a social-media-powered subculture of disgruntled Star Wars fans who are particularly aggrieved at how Lucasfilm has dismantled Star Wars canon, allegedly using the Star Wars brand to pursue a “woke” political agenda at the expense of good storytelling.

“For the first time in a long time, the majority of the fans were happy, and the question you have to ask upfront is, ‘Disney, was it really that hard to show respect to the hero of generations, Luke Skywalker?'” says Buechler. “It must have been, because it took them 8 or 9 years to do it, but when they did do it, it sent a clear message that people still want this type of storytelling, and in this specific case, they want Luke Skywalker because he is Star Wars.”

For me, Luke’s return in “The Mandalorian” is a reminder that great moments are what make movies (and TV shows) memorable, not plot or story lines. People love and remember moments.

As someone who camped out in a dirty theater alleyway in Waterloo, Iowa in the Summer of 1977 to see a movie that was then just called “Star Wars,” I am going to enjoy what Favreau and Filoni gave us on “The Mandalorian” — the moment where the Luke Skywalker we love and remember from childhood returned to Star Wars.
– K.R.K.

Send comments to: nuqum@protonmail.com

Postscript: In recent days, Lucasfilm and Disney social media operatives have been posting messages reminding us that Luke Skywalker himself, Mark Hamill, is a “fan” of the Disney sequel movies, including Rian Johnson’s “The Last Jedi.”

Perhaps that is true. But I also believe Hamill has made it clear in the past couple of days where his heart resides — with George Lucas’ Luke Skywalker:

News media bias and why it’s poorly understood

By Kent R. Kroeger (Source: NuQum.com; December 21, 2020)

Few conversation starters can ruin an otherwise pleasant dinner party (or prevent you from being invited to future ones) faster than asking: Is the news media biased?

If you ask a Democrat, they will tell you the Fox News Channel is the problem (“They started it!” as if explaining to an elementary school teacher who threw the first punch during a playground fight). Ask a Republican and they will say Fox News is just the natural reaction to the long-standing, pervasive liberal bias of the mainstream media.

This past presidential election has poured gasoline on the two arguments.

In late October, the Media Research Center (MRC), a conservative media watchdog group, released research showing that, between July 29 and October 20, 92 percent of evaluative statements about President Trump by the Big Three evening newscasts (ABC, CBS, NBC) were negative, compared to only 34 percent for Democratic candidate Joe Biden. Apart from a few conservative-leaning news outlets, such as the Fox News Channel and The Wall Street Journal, the press ignored the MRC release.

When I shared the MRC research with my wife, her reaction was probably representative of many Democrats and media members: “Why wasn’t their coverage 100 percent negative towards Trump?”

The MRC doesn’t need me to defend their research methods, except I will point out that how they measure television news tone has a long history within media research, dating back to groundbreaking research by Michael J. Robinson and Margaret A. Sheehan, summarized in their 1981 book, “Over the Wire and on TV: CBS and UPI in Campaign ‘80.”

Here is MRC’s description of their news tone measurement method:

“MRC analysts reviewed every mention of President Trump and former Vice President Biden from July 29 through October 20, 2020, including weekends, on ABC’s World News Tonight, the CBS Evening News and NBC Nightly News. To determine the spin of news coverage, our analysts tallied all explicitly evaluative statements about Trump or Biden from either reporters, anchors or non-partisan sources such as experts or voters. Evaluations from partisan sources, as well as neutral statements, were not included.

As we did in 2016, we also separated personal evaluations of each candidate from statements about their prospects in the campaign horse race (i.e., standings in the polls, chances to win, etc.). While such comments can have an effect on voters (creating a bandwagon effect for those seen as winning, or demoralizing the supporters of those portrayed as losing), they are not “good press” or “bad press” as understood by media scholars.”

Besides the MRC, there is another data resource on news coverage tone. It is called the Global Database of Events, Language, and Tone (GDELT) Project and was inspired by the automated event-coding work of Georgetown University’s Kalev Leetaru and political scientist Philip Schrodt (formerly of Penn State University).

The GDELT Project is described as “an initiative to construct a catalog of human societal-scale behavior and beliefs across all countries of the world, connecting every person, organization, location, count, theme, news source, and event across the planet into a single massive network that captures what’s happening around the world, what its context is and who’s involved, and how the world is feeling about it, every single day.”

[For a description of the many datasets available through GDELT, you can go here.]

The GDELT Project’s goals are ambitious, to say the least, but its data may shed some light on the tone of news coverage during this past presidential election.

It is worth a look-see.

For the following analysis, I queried GDELT’s Global Online News Coverage database, filtering it down to US-only daily news articles that mention either Joe Biden or Donald Trump (but not both) from January 15, 2017 to November 22, 2020.

[The APIs used to query the GDELT database are available in this article’s appendix.]

Resulting from these queries were two metrics for each candidate: The first was the daily volume of online news coverage (measured as the percent of monitored articles), and the second was the average daily tone of online news coverage.

The second metric deserves some additional explanation.

GDELT uses Google’s Natural Language API to inspect a given text and identify the prevailing emotional opinion within the text, especially to determine a writer’s attitude as positive, negative or neutral. A text with a summary score over zero indicates that it was positive in overall tone. The higher a score, the more positive the text’s tone. Similarly, negative values indicate an overall negative tone. Values near zero indicate a text that is either neutral (i.e., no clear tone) or contains mixed tones (i.e., both positive and negative emotions).
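For intuition, here is a minimal sketch of how such a document-level summary score might be bucketed into the categories described above; the ±1.0 width of the neutral band is an illustrative assumption, not a threshold defined by GDELT or Google’s API:

```python
def tone_label(score: float, neutral_band: float = 1.0) -> str:
    """Bucket a document-level tone score into a coarse label.

    Scores above zero lean positive and scores below zero lean negative;
    the width of the 'neutral or mixed' band is an assumption made here
    for illustration, not a GDELT-defined cutoff.
    """
    if score > neutral_band:
        return "positive"
    if score < -neutral_band:
        return "negative"
    return "neutral or mixed"

print(tone_label(3.2))   # clearly positive overall tone
print(tone_label(-2.5))  # clearly negative overall tone
print(tone_label(0.4))   # near zero: neutral, or mixed positive/negative
```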

For each news article, tone is calculated at the level of the entire article, not the tone of the sentence(s) mentioning Biden or Trump, so a negative article with a positive mention of Biden or Trump will still be scored negative. Finally, online news articles that mentioned both Biden and Trump were excluded from the analysis (83% of Biden articles mentioned both candidates, while only 9% of Trump articles did). In total, 4,593 online news articles were analyzed.

The resulting time-series data set contained five variables: (1) Date, (2) Daily Volume of Biden-focused Online News Coverage, (3) Average Daily Tone of Biden-focused Online News Coverage, (4) Daily Volume of Trump-focused Online News Coverage, and (5) Average Daily Tone of Trump-focused Online News Coverage.

From this data, I computed Biden’s net advantage in online news coverage tone by multiplying, for each candidate, the day’s news volume by the average news coverage tone. Trump’s volume-weighted news coverage tone was then subtracted from Biden’s.
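That computation can be sketched in a few lines of Python. The daily figures below are hypothetical placeholders, not values from the GDELT extract:

```python
# Each record: (date, Biden volume %, Biden tone, Trump volume %, Trump tone).
# These sample numbers are made up for illustration only.
daily = [
    ("2020-07-25", 1.8, 2.1, 6.0, -0.9),
    ("2020-11-22", 1.5, 0.7, 5.2, -0.1),
]

def net_tone_advantage(biden_vol, biden_tone, trump_vol, trump_tone):
    # Volume-weighted tone for each candidate, then Biden minus Trump.
    return biden_vol * biden_tone - trump_vol * trump_tone

series = [(date, net_tone_advantage(bv, bt, tv, tt))
          for date, bv, bt, tv, tt in daily]
```

Note that weighting by volume means a candidate's tone advantage grows not just with more positive coverage, but with more of it.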

Figure 1 (below) shows Biden’s net advantage in online news coverage tone from January 15, 2017 (near the beginning of Trump’s presidential term) to November 22, 2020.

Figure 1: Biden’s Tone Advantage over Trump in US Online News Coverage

According to the GDELT data, the tone of Biden-focused US online news coverage was far more positive than Trump-focused news coverage. In fact, online news coverage never favored Trump — not for one single day!

While there has been significant variation in Biden’s tone advantage since 2017 — most notably since August 2020, when Biden saw his tone index advantage decrease from 8.9 to 1.3 by late November 2020 — it is remarkable that even when the U.S. economy was booming in late 2019, well before the coronavirus pandemic had impacted the US, Biden was enjoying a significant advantage in online news tone.

Supporting the validity of the GDELT tone data, variation in Biden’s tone advantage fluctuates predictably with known events that occurred during the 2020 campaign.

In a March 25, 2020, interview with Katie Halper, former Biden staff member Tara Reade alleged that Biden had pushed her against a wall, kissed her, put his hand under her skirt, penetrated her with his fingers, and asked, “Do you want to go somewhere else?”

Beyond this allegation, there is only circumstantial evidence supporting Reade’s charge against Biden. Still, the impact of this allegation manifests itself in how Biden’s tonality advantage varied over time.

On March 25, 2020, Biden enjoyed a 7.7 tonality advantage over Trump. That advantage, however, immediately fell in the weeks following Halper’s Reade interview, reaching a relative low of 5.4 on May 11th.

Soon after, Biden’s tonality advantage began to recover rapidly, likely due to two major news stories in May. The first, on May 8th, was the release of U.S. unemployment data showing the highest unemployment rate (14.7%) since the Great Depression, mainly due to job losses from the COVID-19 pandemic. These new economic numbers put the Trump administration in a clear defensive position, despite the fact that similar pandemic-fueled economic declines were occurring in almost every major economy in the world.

On May 25th, the second event — the death of George Floyd while he was being physically restrained by a Minneapolis police officer — sparked a national outcry against police violence against African-Americans. Whether this outrage should have been directed at Trump (as it was by many news outlets) will be a judgment left to historians. What can be said is that Biden’s tone advantage over Trump trended upwards into the summer, reaching a 2020 peak of 9.0 on July 25th.

In the post-convention media environment, which included intermittent media coverage of the Hunter Biden controversy, Biden’s tone advantage declined for the remainder of the time covered in this analysis.

Admittedly, the GDELT data is imperfect in that it does not allow analysis at a sentence- or paragraph-level. Still, the finding in Figure 1 that Biden-focused news articles have been far more positive than Trump-focused news articles is consistent with the overall finding in the MRC tonal analysis of the 2020 presidential election.

Is this conclusive evidence of the news media’s anti-Trump bias? No. But it should inspire a further inquiry into this question, and to do that will require some methodological finesse. That is, it will require far more than just measuring the tone of news coverage.

In a country where President Trump’s approval rating has hovered between 40 and 46 percent through most of his presidency, the fact that — at a critical time in the election — the network TV news programs were over 90 percent negative towards Trump offers some face validity to the anti-Trump media bias argument.

But my wife’s gut reaction to the MRC research contains a profound point: What if Trump deserved the overwhelming negative coverage? After all, is it the job of the news media to reflect public opinion? To the contrary, by definition, an objective news media should be exclusively anchored to reality, not ofttimes fickle variations in public sentiment.

Consequently, the central problem in measuring media bias is finding a measure of objective reality by which to assess a president’s performance. Most everything we hold a president accountable for — the economy, foreign policy, personal character, etc. — is subject to interpretations and opinions that are commonly filtered by the news media through layers of oversimplifications, distortions and other perceptual biases.

Perhaps we can use a set of proxy measures? The unemployment rate. Gross domestic product growth. Stock prices. A president’s likability score. But to what extent does a president have an impact on those metrics? Far less than we may want to believe.

And now add to the equation a global pandemic for which Trump’s culpability, though widely asserted in the national news media, is highly debatable but reckless to dismiss out-of-hand.

How can the U.S. news media possibly be equipped to judge a president’s performance by any objective, unbiased standard?

It isn’t. And, frankly, it is likely the average American doesn’t require news organizations to be so equipped. Despite survey evidence from Pew Research suggesting news consumers dislike partisanship in the news media — most likely an artifact of the social desirability bias commonly found in survey-based research — recent studies also show U.S. news consumers choose their preferred news outlets through partisan and ideological lenses.

According to political scientist Dr. Regina Lawrence, associate dean of the University of Oregon’s School of Journalism and Communication and research director for the Agora Journalism Center, selection bias is consciously and unconsciously driving news consumers towards news outlets that share similar partisan and ideological points of view — and, in the process, increases our country’s political divide:

“Selective exposure is the tendency many of us have to seek out news sources that don’t fundamentally challenge what we believe about the world. We know there’s a relationship between selective exposure and the growing divide in political attitudes in this country. And that gap is clearly related to the rise of more partisan media sources.”

The implication of this dynamic on how journalists do their jobs is significant. There is little motivation across all levels of the news-gathering process — from the corporate board room down to the beat reporter — to put an absolute premium on covering news stories from an objective point of view.

Instead, journalists and media celebrities are motivated by the same psychological and economic forces as the rest of us: career advancement, prestige and money. And to succeed in the news business today, a journalist’s output must fit within the dominant (frequently partisan) narratives of his or her news organization.

In a trailblazing data-mining-based study by the RAND Corporation on how U.S. journalism has changed since the rise of cable news networks and social media, researchers found “U.S.-based journalism has gradually shifted away from objective news and offers more opinion-based content that appeals to emotion and relies heavily on argumentation and advocacy.”

And the result of this shift? Viewership, newsroom investments and profits at the three major cable news networks have significantly increased in the past two decades, at the same time that news consumers have shifted their daily news sources away from traditional media (newspapers and TV network news) towards new media outlets (online publications, news aggregators [e.g., Drudge Report], blogs, and social media). In 2019, the major U.S. media companies — which include assets and revenue streams far beyond those generated from their news operations — had a total market capitalization exceeding $930 billion.

Why then should we be surprised that today’s broadcast and print journalists are not held to a high objectivity or accuracy standard? Their news organizations are prospering for other reasons.

During the peak of the Russiagate furor, as many journalists were hiding behind anonymous government sources, few journalists and producers at CNN, MSNBC, The New York Times or Washington Post openly challenged the basic assumptions of that conspiracy theory which asserted that Trump had colluded with the Russians during the 2016 election — a charge that, in the end, proved baseless.

Apart from ABC News chief investigative correspondent Brian Ross being disciplined for his false reporting regarding Trump campaign contacts with Russia, I cannot recall a single national news reporter being similarly disciplined for bad Russiagate reporting (and there was a lot of bad Russiagate reporting).

On the other side of the coin, there are certainly conservative news outlets where effusive Trump coverage is encouraged, but those cases are in the minority compared to the rest of the mainstream media (a term I despise as I believe the average national news outlet actively restricts the range of mainstream ideas presented to the news consuming public — and, furthermore, there is nothing ‘mainstream’ about the people who populate our national news outlets).

Being from Iowa, I’ve been spoiled by the number of times I’ve met presidential candidates in person. That, however, is not how most Americans experience a presidential election.

Americans generally experience presidential elections via the media, either through direct exposure or indirect exposure through friends, family and acquaintances; consequently, this potentially gives the news media tremendous influence over election outcomes.

According to Dr. Lawrence, the most significant way the news media impacts elections is through who and what they cover (and who and what they don’t cover). “The biggest thing that drives elections is simple name recognition.”

If journalists refuse to cover a candidate, their candidacy is typically toast. But that is far from the only way the news media can influence elections. How news organizations frame an election — which drives the dominant media narratives for that election — can have a significant impact.

The most common frame is that of the horse race in which the news media — often through polling and judging the size and enthusiasm of crowds — can, in effect, tell the voting public who is leading and who has the best chance of winning.

“We know from decades of research that the mainstream media tend to see elections through the prism of competition,” according to Lawrence. “Campaigns get covered a lot like sports events, with an emphasis on who’s winning, who’s losing, who’s up, who’s down, how they are moving ahead or behind in the polls.”

There are other narratives, however, that can be equally impactful — such as narratives centered on a candidate’s character (e.g., honesty, empathy) or intellectual capacity.

Was Al Gore as stiff and humorless as often portrayed in the 2000 campaign? Was George W. Bush as intellectually lazy or privileged as implied in much of the coverage from that same campaign?

Even journalists with good intentions can distort reality when motivated to fit their stories into these premeditated story lines.

More ominous, however, is the possibility that news organizations with strong biases against a particular candidate or political party can manipulate their campaign coverage in such a way that even objective facts are framed to systematically favor the impressions voters form of one candidate over another.

Did that happen in the 2020 presidential election? My inclination is to say yes, but I go back to the original question posed in this essay: Did Donald Trump deserve the overwhelming negative coverage he received across large segments of the national news media?

Without clearly defining and validly measuring the objective, unbiased metrics by which to answer that question, there is no possible way to give a substantive response.

  • K.R.K.

Send comments to: nuqum@protonmail.com

GDELT API for Biden:

https://api.gdeltproject.org/api/v2/summary/summary?d=web&t=summary&k=Biden+-Trump&ts=full&fsc=US&svt=zoom&stt=yes&stc=yes&sta=list&

GDELT API for Trump:

https://api.gdeltproject.org/api/v2/summary/summary?d=web&t=summary&k=Trump+-Biden&ts=full&fsc=US&svt=zoom&stt=yes&stc=yes&sta=list&
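The two URLs above differ only in the keyword filter (k=Biden+-Trump versus k=Trump+-Biden, i.e., articles mentioning one candidate but not the other). As a convenience, here is a small sketch that rebuilds both from a template; it simply reassembles the URLs shown above, with the keyword parameter moved to the end of the query string:

```python
from urllib.parse import quote_plus

# Shared query parameters from the two appendix URLs, with k= left for last.
BASE = ("https://api.gdeltproject.org/api/v2/summary/summary"
        "?d=web&t=summary&ts=full&fsc=US&svt=zoom&stt=yes&stc=yes&sta=list&k=")

def gdelt_summary_url(include: str, exclude: str) -> str:
    # 'Biden -Trump' URL-encodes to 'Biden+-Trump': articles mentioning
    # `include` with any that also mention `exclude` filtered out.
    return BASE + quote_plus(f"{include} -{exclude}")

biden_url = gdelt_summary_url("Biden", "Trump")
trump_url = gdelt_summary_url("Trump", "Biden")
```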

Our fascination with “The Queen’s Gambit”

[Headline photo: Judit Polgár, a chess super grandmaster and generally considered the greatest female chess player ever (Photo by Tímea Jaksa)]

By Kent R. Kroeger (Source: NuQum.com; December 8, 2020)

One of the greatest joys I’ve had as a parent is teaching my children how to play chess.

And the most bittersweet moment in that effort is the day (and match) when they beat you and you didn’t let them.

I had that moment during the Thanksgiving weekend with my teenage son when, in a contest where early on I sacrificed a knight to take his queen (and maintained that advantage for the remainder of the contest), I became too aggressive and left my king vulnerable. By the time I realized the mistake, it was two moves too late. He pounced and mercilessly ended the match.

He didn’t brag. No teasing. Not even a firm handshake. He checkmated me, grabbed a bowl of blueberries out of the fridge, and coolly went to the family room to play Call of Duty with his friends on his Xbox.

I was left with an odd feeling, common among parents and teachers, I suspect. A feeling of immense pride, even as my ego was genuinely bruised.

That is the nature of chess — a game that is both simple and infinitely complex, and offers no prospect of luck for the casual or out-of-practice player. With every move there are only three possibilities: You can make a good decision, a bad decision, or maintain the status quo.

For this, I love and hate chess.

Saying ‘Chess has a gender problem’ is an understatement

My father taught me chess, as his father taught him.

Growing up in the 70s, I had a picture of grandmaster legend Bobby Fischer pasted on my bedroom door, whose defeat of Boris Spassky for the 1972 world chess title ranks with the U.S. hockey team’s 1980 “Miracle on Ice” as one of this country’s defining Cold War “sports” victories over the Soviet Union. That is not an exaggeration.

In part because of Fischer’s triumph, I played chess with my childhood friends more often than any other game, save perhaps basketball and touch football.

But despite chess being prominent in my youth, I have no memories of playing chess with members of the opposite sex. My mom? Bridge was her game of choice. The girls in my neighborhood and school? I cannot recall even one match with them. Granted, between the 5th and 11th grades, I didn’t interact with girls much for any reason. And when I did later in high school, by then my chess playing was mostly on hold until graduate school, except for the occasional holiday matches with my father and brothers.

Any study on the sociohistorical determinants of gender-based selection bias should consider chess an archetype of this phenomenon.

The aforementioned chess prodigy, Bobby Fischer, infamously said of women: “They’re all weak, all women. They’re stupid compared to men. They shouldn’t play chess, you know. They’re like beginners. They lose every single game against a man.”

Any pretense that grandmaster chess players must, by means of their chessboard skills, also be smart about everything else is easily dispelled by referencing the words that frequently came out of Fischer’s mouth when he was alive.

Radio personality Howard Stern once observed that many of the most talented musicians he’s interviewed often lack the ability (or confidence) to talk about anything else except music (and perhaps sex and drugs): “You can’t become that good at something without sacrificing your knowledge in other things.”

That may be one of the great sacrifices grandmaster chess players also make. As evidenced by his known comments on women and chess, Fischer was ill-informed on gender science. Even some contemporary chess greats, such as Garry Kasparov and Nigel Short, have uttered verbal nonsense on the topic.

Writer Katerina Bryant recently reflected on the persistent ignorance within the chess world about gender: “Many of us mistake chess players for the world’s best thinkers, but laying out a champion’s words on the table makes the picture seem much more fractured. It’s a fallacy that someone can’t be both informed and ignorant.”

Why we are watching “The Queen’s Gambit”

Over the Thanksgiving holiday, the intersection in time of losing to my son at chess and the release of Netflix’s new series, The Queen’s Gambit, seemed like an act of providence. The two events reignited my interest in chess and launched a personal investigation into the game’s persistent and overwhelming gender bias.

Of the current 80 highest-rated chess grandmasters, all are men. The highest-rated woman is China’s Hou Yifan (#82).

That ain’t random chance. Something is fundamentally wrong with how the game recruits and develops young talent. The Queen’s Gambit, a fictional account of a young American woman’s rise in the chess world during the 1950s and 60s, speaks directly to that malfunction. While the show mostly focuses on drug addiction, dependency, and emotional alienation, I believe its core appeal is in how it addresses endemic sexism — in this case, the gender bias of competitive chess.

For those who don’t know, The Queen’s Gambit is a 7-part series (currently streaming on Netflix) about Beth Harmon, an orphaned chess prodigy from Kentucky who rises to become one of the world’s greatest chess players while struggling with emotional problems and drug and alcohol dependency. Beth is a brilliant, intuitive chess player…and a total mess.

The Queen’s Gambit is more fun to watch than it sounds. My favorite scene is in Episode 1 when, as a young girl, she plays a simultaneous exhibition against an entire high school chess club and beats them — all boys, of course — easily.

The TV series not only aligns perfectly with the current political mood; its gender-bias message is also not heavy-handed and easily digestible. It is fast food feminism for today’s cable news feminists.

Aside from the touching characterization of Mr. Shaibel, the janitor at Beth’s orphanage who introduces her to chess, almost every other man in The Queen’s Gambit is either sexist, a substance abuser, arrogant, or emotionally stunted. What saves The Queen’s Gambit’s cookie-cutter politics from becoming overly turgid and preachy, however, is that Beth isn’t much better, or at least not until the end. Anya Taylor-Joy, the actress who plays Beth, is brilliant throughout the series and alone makes the show’s seven-hour running time worth the personal investment.

But beyond the show’s high-end acting and production values, it’s hard not to enjoy a television show about a goodhearted but dysfunctional (i.e., substance-addicted) protagonist who must interact with other dysfunctional people in an equally dysfunctional time.

The audience gets all of that from The Queen’s Gambit, along with a thankfully minor but clunky anti-evangelical Christian, anti-anti-Communism side story.

It is the perfect formula for getting love from the critics and attracting an audience, and the reviews and ratings for The Queen’s Gambit prove the point:

“(The Queen’s Gambit) is the sort of delicate prestige television that Netflix should be producing more often.” – Richard Lawson, Vanity Fair

“Just as you feel a familiar dynamic forming, in which a talented woman ends up intimidating her suitors, The Queen’s Gambit swerves; it’s probably no coincidence that a story about chess thrives on confounding audience expectations.” – Judy Berman, TIME Magazine

The audiences have lined up accordingly. In its first three weeks after release (from October 23 to November 8), Nielsen estimates that The Queen’s Gambit garnered almost 3.8 billion viewing minutes in the U.S. alone.

As the streaming TV ratings methodologies are still in their relative infancy, I prefer to look at Google search trends when comparing media programs and properties. In my own research, I have found that Google searches are highly correlated with Nielsen’s streaming TV ratings (ranging between a Pearson correlation of 0.7 and 0.9 from week-to-week).
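That week-to-week comparison is just a Pearson correlation between two series. A minimal sketch of the calculation, using made-up weekly values rather than the actual Nielsen or Google Trends figures:

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson correlation between two equal-length series."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

# Hypothetical weekly figures for one show (illustrative only):
google_trends_index = [100, 85, 72, 60, 51]    # weekly Google Trends index
nielsen_minutes = [1.9, 1.5, 1.4, 1.1, 0.9]    # billions of viewing minutes

r = pearson_r(google_trends_index, nielsen_minutes)
print(round(r, 2))
```

When the two series decay together like this, r lands near the top of the 0.7-to-0.9 range cited above, which is why the Google Trends index works as a serviceable proxy for streaming viewership.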

Figure 1 shows a selection of the most popular streaming TV shows in 2020 and their relative number of Google searches (in the U.S.) from July to early December [The Queen’s Gambit is the dashed black line.]. While The Queen’s Gambit hasn’t had the peak number of searches as some other shows (The Umbrella Academy, Schitt’s Creek, and The Mandalorian), it has sustained a high level of searching over a longer period of time. Over a 5-week period, only The Mandalorian has had a cumulative Google Trends Index higher than The Queen’s Gambit (1,231 versus 1,150, respectively). The next highest is The Umbrella Academy at 1,030.

Figure 1: Google Searches for Selected Streaming TV Shows in 2020

Whatever the core reason for the popularity of The Queen’s Gambit, there is no denying that the show has attracted an unprecedented mass audience for a streaming TV series.

It makes me think Netflix might consider making more than seven episodes.

My one big annoyance with “The Queen’s Gambit”

Mark Twain once said of fiction: “Truth is stranger than fiction, but it is because fiction is obliged to stick to possibilities; truth isn’t.”

While I appreciate Twain’s sentiment — fiction should not be too restrained by truth — he clearly never had to explain to his teenage son that fire-breathing dragons did not exist in the Middle Ages, despite what Game of Thrones suggests. I fear The Queen’s Gambit will launch a generation of people who think an American woman competed internationally in chess in the 1960s, thereby diminishing the much more profound accomplishments of an actual female chess prodigy, Judit Polgár, arguably the greatest female chess player of all time, who famously refused to compete in women-only chess tournaments.

While her achievements would occur two decades after Beth Harmon’s fictional rise, Polgár, a 44-year-old Hungarian, fought the real battle and offers the more substantive and entertaining story, in my opinion. It would be like Hollywood making a movie about a fictional female law professor defeating institutional sexism and rising to the Supreme Court in the 1960s, when the true story of Ruth Bader Ginsburg already exists.

Judit Polgár, a chess super grandmaster, demonstrating the “look” (Photo by Stefan64)

Granted, Walter Tevis’ book upon which The Queen’s Gambit is based was published in 1983, before Polgár was competing internationally, but for someone who followed Polgár’s career, The Queen’s Gambit‘s inspirational tale rings a bit hollow.

No American woman was competing at the highest levels of chess during The Queen’s Gambit’s time frame of the 1950s and 60s. In stark contrast, Polgár actually did so from 1990 to 2014, achieving a peak rating of 2735 in 2005 — which put her at #8 in the world at the time and would place her at #20 in the world today.

Perhaps only chess geeks understand how rare such an accomplishment is in chess, regardless of gender.

Polgár, at the age of 12, was the youngest chess player ever, male or female, to be ranked in the World Chess Federation’s top 100 (she was ranked #55), and she became a Grandmaster at 15, breaking the youngest-ever record previously held by former World Champion Bobby Fischer.

The list of current or former world champions Polgár has defeated in either rapid or classical chess is mind-blowing: Magnus Carlsen (the highest rated player of all time), Anatoly Karpov, Garry Kasparov, Vladimir Kramnik, Boris Spassky, Vasily Smyslov, Veselin Topalov, Viswanathan Anand, Ruslan Ponomariov, Alexander Khalifman, and Rustam Kasimdzhanov. [I’m geeking out just reading these names.]

And most amazing of all — Polgár may not be the best chess player in her family.

Still, I want to be clear: I loved The Queen’s Gambit. I don’t sit up at 3 a.m. watching Netflix on my iPhone unless I have a good reason — such as watching Czech porn. I merely offer a piece of criticism so that some poor sap 20 years from now doesn’t think The Queen’s Gambit is based on a true story. It most certainly isn’t. It is a pure Hollywood-processed work of fiction.

  • K.R.K.

Send comments to: nuqum@protonmail.com

Did analytics ruin Major League Baseball?

[Headline graphic: Billy Beane (left) and Paul DePodesta (right). The General Manager and assistant General Manager, respectively, for the 2002 Oakland A’s and who inspired Michael Lewis’ book, “Moneyball.” (Photo by GabboT; used under the CCA-Share Alike 2.0 Generic license.)]

By Kent R. Kroeger (Source: NuQum.com; November 25, 2020)

As he announced his resignation from the Chicago Cubs as that organization’s president of baseball operations in November, Theo Epstein, considered by many the High Priest of modern baseball analytics, made this shocking admission about the current state of baseball:

“It is the greatest game in the world but there are some threats to it because of the way the game is evolving. And I take some responsibility for that because the executives like me who have spent a lot of time using analytics and other measures to try to optimize individual and team performance have unwittingly had a negative impact on the aesthetic value of the game and the entertainment value of the game. I mean, clearly, you know the strikeout rates are a bit out of control and we need to find a way to get more action in the game, get the ball in play more often, allow players to show their athleticism some more and give the fans more of what they want.”

Epstein’s comments were painful for me on two fronts. First, he was leaving the only baseball team I’ve ever loved, having helped the Cubs win the only World Series championship of my lifetime. Second, he put a dagger in the heart of every Bill James and sabermetrics devotee who, like myself, has spent countless hours poring through the statistical abstracts for Major League Baseball (MLB) and the National Football League on a quest to build the perfect Rotisserie league baseball team and fantasy football roster.

There is no better feeling than the long search for, and discovery of, those two or three “value” players nobody else thinks about who can turn your Rotisserie or fantasy team into league champs.

In a direct way, sports analytics are the intellectual steroids for a generation of sports fans (slash) data geeks who love games they never played beyond high school, if even then.

Epstein’s departure was not entirely a surprise. The Cubs have not come close to repeating their glorious World Series triumph of 2016, though it is hard to pin that on Epstein. The Cubs still have (when healthy) one of the most talented rosters in baseball. Instead, the surprise was Epstein’s targeting of analytics as one of the causes of baseball’s arguable decline.

Like many baseball fans, I’ve assumed baseball analytics—immortalized in Michael Lewis’ book “Moneyball” about the 2002 Oakland A’s and its general manager Billy Beane, who hired a Yale economics grad, Paul DePodesta, to assist him in building a successful small market (i.e., low payroll) baseball team—helped make the MLB, from top-to-bottom, more competitive.

In the movie based on Lewis’ book, starring Brad Pitt and Jonah Hill, this scene perfectly summarizes the value of analytics in baseball (and, frankly, could apply to almost every major industry):

Peter Brand (aka. Paul DePodesta, as played by Jonah Hill):

“There is an epidemic failure within the game to understand what is really happening and this leads people who run major league baseball teams to misjudge their players and mismanage their teams…

…People who run ball clubs think in terms of buying players. Your goal shouldn’t be to buy players. Your goal should be to buy wins, and in order to buy wins you need to buy runs.

The Boston Red Sox see Johnny Damon and they see a star who’s worth seven-and-a-half million dollars. When I see Johnny Damon what I see is an imperfect understanding of where runs come from. The guy’s got a great glove and he’s a decent lead-off hitter. He can steal bases. But is he worth seven-and-a-half million a year?

Baseball thinking is medieval. They are asking all the wrong questions.”

While Beane and DePodesta may have lacked world championships after they introduced analytics into the process, the A’s did have nine winning seasons from 2002 to 2016 during their tenure, which is phenomenal for a small-market, low-payroll team.

At the team-level, the 21st-century A’s are the embodiment of how analytics can help an organization.

But is Epstein still right? Has analytics hurt baseball at the aggregate level?

Let us look at the facts…

Major League Baseball has a Problem

Regardless of the veracity of Epstein’s indictment of analytics for its net role in hurting the game of baseball, does professional baseball have a problem?

The answer is a qualified ‘Yes.’

These two metrics describe the bulk of the problem: (1) Average per game attendance and (2) World Series TV viewership. Since the mid-1990s, baseball game attendance relative to the total U.S. population has been in a near constant decline, going from a high of 118 game attendees (per 1 million people) in the mid-1990s to 98 game attendees (per 1M) in the late 2010s (see Figure 1). At the same time, the long-term trend is still positive. That cannot be discounted.

Figure 1: MLB per game attendance (per 1 million people) (Source: baseball-reference.com)

While the relative decline is significant, the real story of MLB attendance since the league’s inception in the late-19th century is the surge in attendance after World War II, a strong decline after that until the late-1960s, and a resurgence during the 1970s and 80s. In comparison, the attendance decline per capita since the mid-1990s has been relatively small.

Consider also that despite a per capita decline in game attendance since the 1990s, total season attendance has still grown. In 1991, 56.8 million MLB tickets were sold; by 2017, 72.7 million tickets were sold. This increase in gross ticket sales has been matched by a steady rise in MLB ticket prices as well. The average cost of an MLB baseball game in 1991 was $142, but by 2017 that figure had increased to $219 (a roughly 54 percent increase). In that context, the 15 to 20 percent decline in game attendance (per capita) seems more tolerable and far from catastrophic. In fact, if it weren’t for this next metric, baseball might be in great shape, even if its relative popularity is in decline.
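The per-capita comparison here is simple percent-change arithmetic; a minimal sketch using the attendance figures from Figure 1:

```python
def pct_change(old, new):
    """Percent change from an old value to a new value."""
    return (new - old) / old * 100

# Per-game attendance per 1 million people: mid-1990s high vs. late-2010s level
decline = pct_change(118, 98)
print(round(decline, 1))  # roughly a 17 percent decline
```

The result falls inside the 15-to-20-percent window described above.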

The TV ratings and viewership for MLB’s crown jewel event, the World Series, have been in a near straight-line decline since the mid-1970s, when Billy Martin’s New York Yankees and the Tommy Lasorda-led Los Angeles Dodgers were the sport’s dominant franchises and happened to be in the nation’s two largest cities. Big-market teams in the World Series are always good for TV ratings.

As seen in Figure 2, average TV viewership for the World Series (the orange line) has declined from a high of 44.3 million in 1978 (Yankees vs. Dodgers) to just under 9.8 million in the last World Series (Dodgers vs. Rays).

Figure 2: The TV Ratings and Viewership (average per game) for the World Series since 1972 (Source: Nielsen Research)

Even with the addition of mobile and online streaming viewers—which lifts the 2020 World Series viewership number to 13.2 million—the decline in the number of eyeballs watching the World Series since the 1970s has been dramatic.

In combination with the trends in game attendance, the precipitous decline in live viewership offers one clear conclusion: Relatively fewer people are going to baseball games or watching them on TV or the internet. That’s a formula for an impending financial disaster among major league baseball franchises.

While stories of baseball’s imminent death are exaggerated, baseball does have serious problems. But what are they exactly? And how has analytics impacted those probable causes?

Are baseball’s problems bigger than the game itself?

Before looking within the game of baseball itself (and the role of analytics) to explain its relative popularity decline, we must consider the broader context.

Sports fans today demand something different from what MLB offers

Living with a teenage son who loves the NBA and routinely mocks my love of baseball, I see a generational divide that will challenge any attempt to update a sport once considered, without debate, to be America’s pastime. Kids (and, frankly, many of their parents) don’t have the patience or temperament to appreciate the deep-rooted intricacies of a game where players spend more time waiting than actually playing. Only 10 percent of a baseball game involves actual action, according to one study. For kids raised on Red Bull and Call of Duty, baseball is more like a horse and buggy than a Bugatti race car.

And the in-game data supports that assertion. In 1970, a nine-inning major league baseball game took, on average, two-and-a-half hours to complete. In 2020, it takes three hours and six minutes. By comparison, a World Cup soccer match takes one hour and 50 minutes from the moment the first whistle blows. An NBA game takes about two-and-a-half hours.

Baseball is too slow…and getting slower.

[For a well-constructed counterargument to the ‘too slow’ conclusion, I invite you to read this essay.]

In contrast, the NBA and World Cup soccer possess near constant action. Throw in e-games (if you consider those contests a sport) and it is reasonable to conjecture that baseball is simply a bad fit for the times. Even NFL football, whose average game takes over three hours, has challenges in that regard.

Did analytics lead to longer baseball games? Let us examine the evidence.

Figure 3 shows the long-term trend in the length of 9-inning MLB games divided into baseball ‘eras’ as defined by Mitchell T. Woltring, Jim K. Rost, and Colby B. Jubenville in their 2018 research paper published by Sports Studies and Sports Psychology. They identified seven distinct eras in major league baseball: (1) “Dead Ball” (1901 to 1919), (2) “Live Ball” (1920 to 1941), (3) “Integration” (1942 to 1960), (4) “Expansion” (1961 to 1976), (5) “Free Agency” (1977 to 1993), (6) “Steroids” (1994 to 2005) and (7) “Post-Steroids” (2006 to 2011). However, for this essay, I relabeled their ‘Post-Steroids era’ as the ‘Analytics era’ and extended it to the present.

(Note: MLB game length was not consistently measured until the “Integration era.”)

Figure 3: Average length of a 9-inning MLB game since 1946.

Though I will share upon request the detailed statistical analysis of the intervention effects of the baseball eras on the average length of MLB games, the basic findings are straightforward:

(1) The average length of 9-inning MLB games significantly increased during the ‘Integration,’ ‘Free Agency,’ and ‘Analytic’ eras, but did not increase during the ‘Expansion’ and ‘Steroids’ eras.

(2) The long-term trend was already pointing up before the ‘Analytics era’ (+50 seconds per year), though analytics may have had a larger marginal effect on game length (+78 seconds per year).
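The marginal-effect comparison in finding (2) amounts to fitting a least-squares trend to each segment and comparing slopes. A sketch using synthetic, exactly linear game-length data built to mimic the reported +50 and +78 seconds-per-year trends (the real analysis uses the full historical series and proper intervention terms):

```python
def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys on xs (units of y per unit of x)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic average 9-inning game lengths in minutes (illustrative only):
pre_years  = list(range(1994, 2006))                      # pre-'Analytics era'
pre_mins   = [170 + 0.83 * (y - 1994) for y in pre_years]
post_years = list(range(2006, 2021))                      # 'Analytics era'
post_mins  = [180 + 1.30 * (y - 2006) for y in post_years]

print(round(ols_slope(pre_years, pre_mins) * 60),    # ~50 seconds per year
      round(ols_slope(post_years, post_mins) * 60))  # ~78 seconds per year
```

A fuller version would regress the whole series on a single time trend plus era dummies, which is how an intervention effect per era would normally be estimated.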

As to why the ‘Analytics era’ saw an increase in game times, one suggested explanation is that the ‘Steroids era’ disproportionately rewarded juiced-up long-ball hitters who tended to spend less time at the plate. In contrast, though the ‘Analytics era’ also has emphasized home run hitting, the players hitting home runs are now more patient. According to baseball writer Fred Hofstetter, pitchers have also changed:

“This (increase in game times) won’t surprise anyone who follows the game closely. The general demographic change trending into 2020:

  1. Patient hitters are replacing free swingers
  2. Hard-throwing strikeout-getters are replacing pitch-to-contact types

Pitchers who throw harder tend to take more time between pitches. Smart hitters take more pitches. There are more pitches with more time between them. The result is a rising average of time between pitches.”

Are these changes in the game related to analytics? It is hard to know given the concurrent (and assumed) decline in steroid use in the 2000s MLB, but the apparent consensus is that the pitcher-batter dynamics since 2000 have been more sophisticated and time-consuming than during the ‘Steroid era.’

My conclusion on the impact of analytics on the length of MLB baseball games: Unclear.

Are there other aspects of baseball affected by analytics?

Investigating the role of analytics in 21st century baseball is complicated by the confounding effects of other changes going on in the game around the same time — the most obvious being MLB’s increased enforcement of its performance enhancing drug policies. But sports writer Jeff Rivers notes another ongoing trend: this country’s best athletes are increasingly choosing football and basketball over baseball, though this trend may have been going on for some time.

“Major League Baseball used to offer its athletes the most prestige, money and fame among our nation’s pro team sports, but that hasn’t been true for decades,” writes Rivers. “Consequently, Major League Baseball continues to lose in the competition for talent to other major pro team sports.”

It is also possible analytics have exacerbated this supposed decline in athlete quality by discouraging some of baseball’s most exciting plays.

“The focus on analytics in pro sports has led to more scoring in the NBA…but fewer stolen bases and triples, two of the game’s most exciting plays, in pro baseball,” asserts Rivers.

Is there really a distinct ‘Analytics era’ in baseball?

Another problem in assessing the role of baseball analytics is that the ‘Analytics era’ (what I’ve defined as 2006 to the present) may not be that distinct.

Henry Chadwick invented the baseball box score in 1858 and, by 1871, statistics were consistently recorded for every game and player in professional baseball. In 1964, Earnshaw Cook published his statistical analysis of baseball games and players and seven years later the Society for American Baseball Research (SABR) was founded.

In the early 1970s, as statistics advanced as a topic among fans, Baltimore Orioles player Davey Johnson was writing FORTRAN computer code on an IBM System/360 to generate statistical evidence supporting his belief that he should bat second in the Orioles lineup (his manager Earl Weaver was not convinced, however).

In 1977, Bill James published his first annual Baseball Abstracts which, through the use of complex statistical analyses, argued that many of the popular performance metrics — such as batting average — were poor predictors of how many runs a team would score. James and other SABRmetricians (as they would be called) argued that a better measure of a player’s worth is his ability to help his team score more runs than the opposition. To that end, they initially preferred metrics such as On-Base Percentage (OBP) and Slugging Percentage (SLG) to judge player value, and would later combine those metrics into the On-base Plus Slugging (OPS) performance metric.

[Note: OBP is the ratio of the batter’s times-on-base (TOB) (which is the sum of hits, walks, and number of times hit by pitch) to their number of plate appearances. SLG measures a batter’s productivity and is calculated as total bases divided by at bats. OPS is simply the sum of OBP and SLG.]
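Those definitions translate directly into code. A minimal sketch following the simplified formulas in the note above (official MLB formulas differ slightly, e.g., in what counts toward the denominators):

```python
def obp(hits, walks, hbp, plate_appearances):
    """On-Base Percentage: times on base / plate appearances."""
    return (hits + walks + hbp) / plate_appearances

def slg(singles, doubles, triples, home_runs, at_bats):
    """Slugging Percentage: total bases / at bats."""
    total_bases = singles + 2 * doubles + 3 * triples + 4 * home_runs
    return total_bases / at_bats

def ops(obp_val, slg_val):
    """On-base Plus Slugging: OBP + SLG."""
    return obp_val + slg_val

# Example: a hitter with 150 hits (100 singles, 30 doubles, 5 triples, 15 HR),
# 60 walks, 5 hit-by-pitch, 600 plate appearances, 520 at bats:
o = obp(150, 60, 5, 600)      # ~ .358
s = slg(100, 30, 5, 15, 520)  # ~ .452
print(round(ops(o, s), 3))
```

That sample line works out to an OPS around .810, the kind of solidly above-average mark SABRmetricians prize over a raw batting average.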

Batting averages and pitchers’ Earned-Run-Averages (ERA) have been a systematic part of player evaluations since baseball’s earliest days. Modern analytics didn’t invent most of the statistics used today to assess player value, but merely refined and advanced them.

Nonetheless, there is something fundamentally different in how MLB player values are assessed today than in the days before Billy Beane, Paul DePodesta and Moneyball.

But when did analytics truly take over the talent acquisition process in major league baseball? There is no single, well-defined date. However, many baseball analysts point to the 2004 Boston Red Sox, whose general manager was Theo Epstein, as the first World Series winner to be significantly driven by analytics.

Something unique and profound was going on in major league baseball’s front offices in the time between Billy Beane’s 2002 A’s and the Boston Red Sox’ 2007 World Series win, their second championship in four years.

By 2009, most major league baseball teams had a full-time analytics staff working in tandem with their traditional scouting departments, according to Business Administration Professor Rocco P. Porreca.

So, why did I pick 2006 as the start of the ‘Analytics era’? No definitive reason except that is roughly the halfway point between the release of Lewis’s book Moneyball and 2009, the point at which most major league baseball teams had stood up a formal analytics department. It would have been equally defensible to set 2011 or 2012 as the starting point for the ‘Analytics era’ as many of the aggregate baseball game measures we are about to look at changed direction at around that time.

The Central Mantra of Baseball Analytics: “He gets on base”

Lewis’ book Moneyball outlined the baseball player attribute that 2002 A’s assistant general manager Paul DePodesta sought most when evaluating talent: select players who can get on base.

This scene from the movie Moneyball drives home that point:

As the 2002 A’s scouting team identifies acquisition prospects, the team’s general manager, Billy Beane, singles out New York Yankees outfielder David Justice:

A’s head scout Grady Fuson:  Not a good idea, Billy.

Another A’s scout: Steinbrenner’s so pissed at his decline that he’s willing to eat a big chunk of his contract just to get rid of him.

Billy Beane:  Exactly.

Fuson: Ten years ago, David Justice—big name. He’s been in a lot of big games. He’s gonna really help our season tickets early in the year, but when we get in the dog days in July and August, he’s lucky if he’s gonna hit his weight…we’ll be lucky if we get 60 games out of him. Why do you like him?

[Beane points at assistant general manager Peter Brand (aka. Paul DePodesta)]

Peter Brand: Because he gets on base.

This was the fundamental conclusion analytic modelers started driving home to a growing number of baseball general managers after 2002: find players who can get on base.

And Theo Epstein was among the first general managers to drink the analytics Kool-Aid and he did it while leading one of baseball’s richest franchises — the Boston Red Sox. Shortly after the 2002 World Series, the Red Sox hired the 28-year-old Epstein, the youngest general manager in MLB history, to help them end their 86-year World Series drought. Two years later, the Red Sox and Epstein did just that, and one of the reasons cited for the Red Sox success was Epstein’s use of analytics for player evaluations. Eventually, Epstein would take his analytics to the Chicago Cubs in 2011, who then ended their 108-year championship drought five years later.

Until Epstein’s departure from the Cubs, there had been scant debate within baseball about the value of analytics. Almost every recent World Series champion — the Red Sox, Cubs, Royals, Astros, and others — has an analytics success story to tell. By all accounts, it’s here to stay.

So why, on his way out the door in Chicago, did Epstein throw a verbal grenade into the baseball fraternity by suggesting analytics have had “a negative impact on the aesthetic value of the game and the entertainment value of the game”? He specifically cited analytics’ role in the recent rise in strikeouts, bases-on-balls, and home runs (as well as the decline in stolen bases) as the primary cause of baseball’s aesthetic decline.

Is Epstein right? The short answer is: It is not at all clear baseball analytics are the problem, even if they did change the ‘aesthetics’ of the game.

A brief look at the data…

As a fan of baseball, I find bases-on-balls and strike outs near the top of my list of least favorite in-game outcomes.

But when we look at the long-term trends in walks and strike outs, it’s hard to pin the blame on analytics (see Figure 4). Strike outs in particular have been on a secular rise since the beginning of organized baseball in the 1870s, with only three periods of sustained decreases — the ‘Live Ball,’ ‘Expansion,’ and ‘Steroid’ eras. The ‘Analytics era’ emphasis on hard-throwing strike out pitchers over slower-throwing ‘location’ pitchers may be working (strike outs have gone from 6 to 9 per team per game), but it is part of baseball’s longer-term trend — baseball pitchers have become better at striking out batters since the sport’s beginning. The only times batters have caught up with pitching have been when the baseball itself was altered (‘Live Ball era’), pitching talent was watered down (‘Expansion era’), or the batters juiced up (‘Steroids era’).

As for the rise in bases-on-balls, there is evidence of a trend reversal around 2012, with walks rising sharply between 2012 and 2020, the heart of the ‘Analytics era.’ At least tentatively, therefore, we can conclude one excitement-challenged baseball event has become more prominent, but even in this case, the current number of walks per team per game (= 3.5) is near the historical average. At the bases-on-balls peak in the late-1940s, baseball was at its apex in popularity and MLB attendance declined as bases-on-balls plummeted through the 1950s (see Figure 1).

Figure 4: Trends in Bases-on-Balls and Strike Outs in Major League Baseball since 1871.

It is difficult to blame baseball’s relative decline in popularity on increases in strike outs and walks or the role of analytics in those in-game changes.

But what about two of baseball's most exciting plays, stolen bases and home runs? According to Epstein, the analytics-driven decline in stolen bases and concomitant rise in home runs have robbed the game of crucial action that helps drive fan excitement.

As shown in Figure 5, there is strong evidence that the ‘Analytics era’ has seen a reversal in trends for both stolen bases and home runs. Since 2012, the number of home runs per team per game has risen from 0.9 to 1.3, and the number of steals per team per game has fallen from 0.7 to 0.5.
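The per-team-per-game figures above are simple rates: a league-wide event total divided by the total number of team-games played. A minimal Python sketch of that arithmetic (the totals below are illustrative round numbers, not actual MLB data):

```python
def per_team_per_game(total_events, teams, games_per_team):
    """League-wide event rate: total events divided by total team-games."""
    return total_events / (teams * games_per_team)

# e.g., a 30-team league, 162 games per team, 6,300 home runs league-wide:
print(round(per_team_per_game(6300, teams=30, games_per_team=162), 2))  # 1.3
```

The same function covers any of the rates cited here (home runs, steals, walks, strike outs); only the event total changes.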

Stolen bases may be a rarity in today's baseball, but they have not been common at any point since the 'Live Ball era,' peaking at around 0.9 per team per game in the late 1980s. In truth, stolen bases have never been a big part of the game.

Home runs are a different matter. Epstein's charge that there are too many home runs in today's baseball is puzzling. In 45 years as a baseball fan, I have yet to hear a fan complain that his or her team hit too many home runs.

Yes, home runs eliminate some of the drama associated with hitting a ball in play (Will the batter stretch a single into a double or a double into a triple? Will the base runner go for third or for home?), but do those in-game aesthetics create more adrenaline or dopamine than the anticipation over whether a well-hit ball will clear the fence? I, personally, find it hard to believe that too many home runs are hurting today's baseball.

But is Epstein right in saying analytics may have played a role in the recent increase in home runs? The answer is an emphatic yes.

As the MLB worked to remove steroids from the game in the late 1990s, the number of home runs per game dropped dramatically…until 2011. As the ‘Analytics era’ has become entrenched in baseball, home runs have increased year-to-year as fast as they did during the heyday of steroids, rising from 0.9 per game per team in 2011 to 1.3 in 2020. In an historical context, professional baseball has never seen as many home runs as it does today.

However, again, in the long-term historical context, the ‘Analytics era’ is just continuing a trend that has existed in baseball since its earliest days. Most batters have always coveted home runs and all pitchers have loathed them — analytics didn’t cause that dynamic.

Figure 5: Trends in Stolen Bases and Home Runs in Major League Baseball since 1871.

The holy grail of baseball analytic metrics is on-base plus slugging (OPS), the sum of a batter's on-base percentage and slugging percentage: a comprehensive measure of batter productivity that rewards both reaching base and extra-base power (and, from a defensive perspective, an indicator of how well a team's pitching and fielding stunt batter productivity).

The highly regarded OPS matters to baseball analytics gurus because of its strong correlation with the proximal cause of winning and losing: the number of runs a team scores.

Since 1885, the Pearson correlation between OPS and the number of runs per game is 0.56 (highly significant at the two-tailed, 0.05 alpha level). And it is on the OPS metric that the 'Analytics era' has made a surprisingly modest impact, hardly large enough to be responsible for harming the popularity of baseball (see Figure 6). If anything, shouldn't a higher aggregate OPS indicate a more exciting brand of baseball, even if it includes a larger number of home runs?
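Both quantities discussed here are easy to compute directly: OPS is just on-base percentage plus slugging percentage, and the season-level association with run scoring is an ordinary Pearson coefficient. A minimal Python sketch (the batter's counting stats below are made up for illustration; this is not the article's actual dataset):

```python
import math

def obp(hits, walks, hbp, at_bats, sac_flies):
    """On-base percentage: times on base over plate appearances (simplified)."""
    return (hits + walks + hbp) / (at_bats + walks + hbp + sac_flies)

def slg(singles, doubles, triples, homers, at_bats):
    """Slugging percentage: total bases per at-bat."""
    return (singles + 2 * doubles + 3 * triples + 4 * homers) / at_bats

def ops(hits, singles, doubles, triples, homers, at_bats, walks, hbp, sac_flies):
    """On-base plus slugging: OBP + SLG."""
    return (obp(hits, walks, hbp, at_bats, sac_flies)
            + slg(singles, doubles, triples, homers, at_bats))

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A batter with 150 hits (100 singles, 30 doubles, 5 triples, 15 HR) in 500
# at-bats, plus 60 walks, 5 hit-by-pitches and 5 sacrifice flies:
print(round(ops(150, 100, 30, 5, 15, 500, 60, 5, 5), 3))  # 0.847
```

Feeding `pearson` two lists of season-level values (league OPS and runs per game, one entry per season) reproduces the kind of correlation cited above.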

Prior to the ‘Analytics era,’ the ‘Steroids era’ (1994 to 2005) witnessed a comparable surge in OPS (and home runs) and the popularity of baseball grew, at least until stories of steroids-use became more prominent in sports media.

Figure 6: Trends in On-base Plus Slugging (OPS) and # of Runs in Major League Baseball since 1871.

Epstein’s pinning baseball’s current troubles on analytics begs the question of what other factors could also be explaining some of the recent changes in the game’s artfulness. These in-game modifications cannot all be dropped at the feet of analytics. The slow pruning out of steroids from the game, shifts in baseball’s young talent pool, the changing tastes of American sports fans, and the growth in other sports entertainment options cannot be ignored.

Final Thoughts

Baseball has real problems, particularly with the new generation of sports fans. The MLB should not underestimate the negative implications of this problem.

However, the sport is not dying and analytics is not leading it towards a certain death. Analytics did not cause baseball’s systemic problems.

For those who assume major league baseball is a sinking ship, analytics has done little more than rearrange the deck chairs on the Titanic. However, for those of us who believe baseball is still one of the great forms of sports entertainment, we must admit the sport is dangerously out of touch with the modern tastes and appetites of the average American sports fan.

And though analytics may not have helped the sport as much as Moneyball suggested it would, neither has it done the damage Epstein suggests.

  • K.R.K.

Send comments to: nuqum@protonmail.com

Postscript:

This is my favorite scene from Moneyball. It is the point at which head scout Grady Fuson (played by Ken Medlock) confronts Billy Beane (Brad Pitt) over his decision-making style as general manager. Most Moneyball moviegoers (and readers of Lewis' book) probably view Fuson as the bad guy in the film, a dinosaur unwilling to change with the times. As a statistician who has faced similar confrontations in similar contexts, I see Fuson as an irreplaceable reality check for data wonks who believe hard data trumps experience and intuition. In my career, I have found all of those perspectives important.

Fuson asks Beane into the hallway so he can clear the air. Fuson then says to Beane:

“Major League Baseball and its fans would be more than happy to throw you and Google boy under the bus if you keep doing what you’re doing here. You don’t put a team together using a computer.

Baseball isn’t just numbers. It’s not science. If it was, anybody could do what we’re doing, but they can’t because they don’t know what we know. They don’t have our experience and they don’t have our intuition.

You’ve got a kid in there that’s got a degree in economics from Yale and you’ve got a scout here with 29 years of baseball experience.

You’re listening to the wrong one now. There are intangibles that only baseball people understand. You’re discounting what scouts have done for a hundred and fifty years.”

Years later, Fuson would react to how he was portrayed in Lewis’ book and subsequent movie:

“When I was a national cross-checker, I raised my hand numerous times and said, ‘Have you looked at these numbers?’ I had always used numbers. Granted, as the years go on, we’ve got so many more ways of getting numbers. It’s called ‘metrics’ now. And metrics lead to saber-math. Now we have formulas. We have it all now. But historically, I always used numbers. If there’s anything that people perceived right or wrong, it’s that me and Billy are very passionate about what we do. And so when we do speak, the conversation is filled with passion. He even told me when he brought me back, ‘Despite what some people think, I always thought we had healthy, energetic baseball conversations.’”

At times I think people want to believe analytics and professional intuition are mortal enemies. In my experience, one cannot live without the other.

 

Wake up, America! The U.S. is not going bankrupt

[Headline graphic: Photo downloaded from flickr.com/photos/68751915@N05/6793826885 (This image is used under the CCA-Share Alike 2.0 Generic license)]

By Kent R. Kroeger (Source: NuQum.com; November 25, 2020)

The recent Twitter exchange by Democratic congresswoman Alexandria Ocasio-Cortez (AOC) and Nikki Haley, the former U.S. Ambassador to the U.N., over whether the U.S. can afford direct payments to U.S. households to help with the economic damage caused by the coronavirus pandemic is a shining example of how our two major political parties can unite in constructive dialogue to solve our nation’s most pressing problems.

I’m just kidding.

The AOC-Haley debate over direct financial aid to Americans suffering due to the pandemic was a poopfest.

AOC and Nikki basically told Joe Biden and his call for national unity to go screw themselves.

The AOC-Haley Twitter exchange lasted only a few snarky tweets:

AOC: To get the (corona)virus under control, we need to pay people to stay home.

Haley: AOC, Are you suggesting you want to pay people to stay home from the money you take by defunding the police? Or was that for the student debts you wanted to pay off, the Green New Deal or Medicare for All? #WhereIsTheMoney

AOC: Nikki, I’m suggesting Republicans find the spine to stand up to their corporate donors & vote for the same measures they did in March, except without the Wall St bailout this time.

And I know you’re confused abt actual governance but police budgets are municipal, not federal.

AOC: Utterly embarrassing that this woman was a governor & still doesn’t have a grasp on public investment. Wonder if she says federal financing works like a piggy bank or household too?

All this faux-seriousness from folks who worship Trump for running the country like his casino.

And if you allow me to judge the winner of the AOC-Haley Twitter spat, AOC won by first-round knockout. It was the 1986 Mike Tyson-Marvis Frazier fight, only with more attractive combatants.

But before I give AOC too much credit for her rebuttal to Haley's assertion that the U.S. can't afford to solve America's most serious problems (i.e., health care costs, student debt, climate change, etc.), realize that AOC is simply repeating an economic argument developed by advocates of Modern Monetary Theory (MMT).

In an over-simplified summary, MMT describes currency as a public monopoly and economic problems such as unemployment as evidence that a currency monopolist is overly restricting the supply of the financial assets needed to pay taxes and satisfy savings desires.

Stony Brook University Professor Stephanie Kelton, a former Bernie Sanders economic advisor, is among MMT’s most visible current advocates.

And, in truth, MMT is not that new. The theory's core ideas are not far removed from 100-year-old Chartalist theory, and similar economic arguments have been offered by U.S. economists and bankers for decades.

In a dialogue published by the Harvard Business Review in 1993, William A. Schreyer, Chairman of the Board Emeritus of Merrill Lynch & Co., Inc., and a sharp critic of MMT-style thinking, still concurred with one of its principal conclusions:

“The federal budget deficit is not the most important threat facing the U.S. economy. When policymakers focus narrowly on the budget deficit, they ignore what truly drives rising prosperity and long-term economic growth, that is, saving and investment. There is real danger in Washington’s myopic fear of the deficit. As we have seen too often in recent years, a focus on deficit-driven government accounting can place growth-oriented economic policy in a straitjacket.”

But there is also no doubt that politicians like Bernie Sanders and AOC are probably most responsible for helping propel MMT-thinking into mainstream political debate.

But before you think I'm a doe-eyed lefty, think again. I voted for Trump twice, label myself 'pro-life,' believe 'woke' politics has more to do with politicians finding a new way to milk Americans for campaign donations than anything else, and remain a healthy skeptic of the doomsday predictions surrounding climate change (though I am convinced the earth is warming due to human activity and that its consequences are real).

AOC would never ask someone like me for my vote.

Nonetheless, AOC’s logic on helping American households deal with the economic consequences of the pandemic is far more coherent than anything Haley or any other major politician have said on the subject. And that includes Nancy Pelosi and Joe Biden.

No country has ever gone bankrupt spending money on solving its most serious problems — and the coronavirus pandemic is that type of problem.

As Prof. Kelton likes to ask, “Did we spend too much money on World War 2?”

All that being said, I remain sensitive to the question: “Can large, long-term federal deficits be a bad thing?”

And my understanding of Prof. Kelton and MMT is that, of course, federal deficits are bad if the government spends the money poorly. Imagine if our $27 trillion national debt had been obtained entirely through the government purchase of solid gold bathtubs for every American street corner. Our currency would be worthless and our economy in a shambles. Buying U.S. debt would be the worst investment option on the planet.

But that’s not how this country has spent the money it has printed. Not even close. Past spending and investment decisions (public and private) have made the U.S. economy the best long-term investment around. Admittedly that could change. A twenty-year military occupation of country with little impact on the world economy or U.S. strategic interests might do that if we give it a chance. But even that questionable investment comes with economic upsides, particularly if you are a military contractor or an MSNBC military “analyst.”

The fact is, simplistic notions of what policies the U.S. can and cannot afford are rooted in, as AOC puts it, a flawed understanding of public investment. Financing the federal deficit is not like a piggy bank or household budget. To treat it as such is to risk doing real and lasting harm to the U.S. economy and the American people.

Therefore, it is time we collectively wake up to the con that the U.S. cannot sustain deficit spending, a deception engineered out of self-interest by politicians from both parties who gain more power by perpetuating it.

The reality is that the U.S. can sustain deficit spending as long as the money is spent wisely and solves real problems.

AOC — 1 … Nikki Haley and the U.S. Political Establishment — 0.

  • K.R.K.

Send comments to: nuqum@protonmail.com