Be part of the solution, not the problem

By Kent R. Kroeger (Source: NuQum.com; January 16, 2021)

Could Donald Trump’s presidency have ended any other way?

What happened at — and, more importantly, in — the U.S. Capitol on January 6th was tragic. People died because an uncontrollable mob formed outside the U.S. Capitol to support a president who, at best, was recklessly naive about what a mass rally like that could turn into; and, at worst, deliberately ignited those flames.

If only Trump instead of me had gotten this fortune cookie and taken it to heart:

“If you win, act like you are used to it. If you lose, act like you love it.” — A fortune cookie

To my Biden-supporting readers, concerned that I am going to defend Trump’s actions leading up to the storming of the U.S. Capitol on January 6th, rest easy. I am not.

Now is not the time to discover the mental gymnastics necessary to excuse a political act — Trump’s rally to “Stop the Steal” — that a child would have realized had the potential to provoke significant violence.

To my Trump-supporting readers, already practicing levels of emotional isolation and self-censorship that can’t possibly be good for your long-term health, you will be spared any self-important, virtue-signaling lecture about the moral righteousness of Republicans “brave” enough to disown Trump or how the GOP’s many latent malignancies were exposed (and exploited) by the Trump presidency.

No, instead, I will use the January 6th debacle to share what I am telling myself so I can help make sure something like that sh*t-carnival never happens again.

For starters…

Now is NOT the time to say, ‘They started it.’

I will not, for partisan purposes, compare or equate the Capitol riot to last year’s George Floyd/Black Lives Matter protests, in which at least 19 people died and approximately $1.5 billion in property damage was done.

Protests turning deadly are not that uncommon in U.S. history, and they have been instigated from both the left and the right. We have even seen gun violence directed at U.S. House members inside the Capitol building itself (the 1954 Capitol shooting).

But to use the 2021 Capitol riot tragedy to propel the narrative that violence is primarily the domain of the political right is to willfully ignore instances such as the 12 people who died of lead poisoning in Flint, Michigan when a Democrat mayor, a Republican governor, and an oddly passive Environmental Protection Agency under Barack Obama carelessly switched Flint’s water supply in order to save money.

One might say that Flint is a different kind of violence, and they’d be right. I think it’s worse. It’s silent. Its perpetrator is hard to identify. And it is even harder to secure justice and restitution.

Or how about the hundreds of mostly brown people U.S. drones and airstrikes kill every year? These military and intelligence actions, uniformly funded by bipartisan votes since the 9/11 attacks, have arguably accomplished little except make the U.S. the world’s most prolific killer of pine nut farmers in Afghanistan.

Whether or not we acknowledge it, deadly violence is a central part of our culture, and no political party, ideology, race or ethnicity is immune from being complicit in it.

Now is NOT the time to call other people conspiracy theorists — especially since we are all inclined to be one now and then.

While I emphatically oppose the overuse of mail-in voting (particularly when third parties are allowed to collect and deliver large numbers of completed ballots) on the grounds that it compromises two core principles of sound election system design — timeliness and integrity — it is regrettable that Trump and his subordinates have encouraged his voters to believe the three-headed chimera that the 2020 presidential election was stolen. The evidence simply isn’t there, as hard as they try to find it.

That said, for Democrats or anyone else to call Trump voters “conspiracy theorists” is to turn a blind eye to a four-year Democratic Party and news media project called Russiagate that, in the brutal end, found no evidence of a conspiracy between the 2016 Trump campaign and the Russians to influence the 2016 election. At this point my Democrat friends usually lean in and say something like, “The Mueller investigation found insufficient evidence to indict Trump and his associates on conspiracy charges — read the Mueller report!” At which time I lean in and say, “Read the Mueller report!” There was no evidence of a conspiracy, a term with a distinct legal definition: an agreement between two or more people to commit an illegal act, along with an intent to achieve the agreement’s goal.

What the Mueller report did document was: (1) the Trump campaign’s clumsy quest to find Hillary Clinton’s 30,000 deleted emails (George Papadopoulos and Roger Stone), (2) the incoming Trump administration’s opening of a dialogue with a Russian diplomat (Sergey Kislyak) through a Trump administration representative (General Michael Flynn), and (3) the Trump Organization’s effort to build a Trump Tower in Moscow. All of those actions were legal — as they should be.

And, yes, I am skeptical that Lee Harvey Oswald acted alone — even as I believe he was the lone gunman. If that makes me a conspiracy theorist, so be it.

Now is NOT the time to shame people for believing that most of our political elites work more for the political donor class than the average American (whoever that is).

I do not believe the data supports the thesis that economic grievances are the primary factor behind Trump’s popularity within the Republican Party. Instead, the evidence says something deeper drives Trump support, more rooted in race, social status, and culture than economics.

Still, one stark realization binds many Democrat progressives and Trump supporters, and it has been continually buried over the past four-plus years of anti-Trump media coverage: this country’s political-economic system is designed primarily to serve the interests of a relatively small number of Americans.

In Democracy in America?: What Has Gone Wrong and What We Can Do About It (University of Chicago Press, 2017), perhaps the most important political science book of the past thirty years, political scientists Benjamin Page and Martin Gilens offer compelling evidence that public policy in the U.S. is best explained by the interests of elites, not those of the average American. In fact, the disconnect is so bad, in their view, that it is fair to ask whether Americans even live in a democracy.

“Our analysis of some 2,000 federal government policy decisions indicates that when you take account of what affluent Americans, corporations and organized interest groups want, ordinary citizens have little or no independent influence at all,” Page and Gilens said in a Washington Post interview while promoting their book. “The wealthy, corporations and organized interest groups have substantial influence. But the estimated influence of the public is statistically indistinguishable from zero.”

“This has real consequences. Millions of Americans are denied government help with jobs, incomes, health care or retirement pensions. They do not get action against climate change or stricter regulation of the financial sector or a tax system that asks the wealthy to pay a fair share. On all these issues, wealthy Americans tend to want very different things than average Americans do. And the wealthy usually win.”

And while Page and Gilens’ research rightfully has methodological detractors, the most direct statistical indicator of its validity — wealth inequality — has been growing steadily in the U.S. since 1990, with a few temporary pauses during the Clinton administration, the 2008 worldwide financial crisis, and the Trump administration (yes, you read that right).

[Chart. Source: St. Louis Federal Reserve]

Only the disproportionate amount of the coronavirus pandemic relief money going to corporate bank accounts has put the wealthiest 1-percent back near their Obama administration highs.

So while Trump supporters don’t always marshal the best evidence-based critiques of the American political system, with a little more effort and the help of better leaders it wouldn’t be hard for them to do so.

Now is NOT the time to reduce three-fifths of our population down to words like ‘fascist’ and ‘racist.’

Are there racist Republicans? Of course there are — around 45 percent of white Republican voters, according to my analysis of the 2018 American National Election Study (Pilot). That same analysis, which used a measure of racial bias common in the social science literature, found that 20 percent of white Democrat voters have a more favorable view of their own race relative to African-Americans and/or Hispanics. Any assumption that racism is unique to Trump supporters, or takes a more toxic form among them, is challenged by the evidence.

Now IS the time for cooler heads to prevail, which eliminates almost anyone appearing on the major cable news networks in the past two weeks.

The national news media profits from the use of exaggeration and hyperbole. That can never be discounted when talking about events such as what happened January 6th.

Here is how Google searches on the term ‘coup d’état’ were affected by the Capitol riot:

[Chart. Source: Google Trends]

I confess I was not horrified watching live on social media as Trump supporters forced their way into the Capitol. I was shocked, but not horrified. A small semantic difference, but an important one. At no point did I think I was watching an ongoing coup d’état.

But my family and friends who watched the mob on the major cable news networks thought an actual coup d’état was in motion — that this mob was making a viable attempt to stop the electoral college vote, overturn the 2020 election, and keep Trump in the presidency.

The news media has an obligation to discern fact from fantasy, but on January 6th it did the exact opposite, helping fan the spread of disinformation coming out of news reports from inside the Capitol.

As disconcerting as the scene was on January 6th, there is a chasm-sized difference between Facebook chuckleheads causing a deadly riot and a credible attempt to take over the U.S. government.

This is how journalist Michael Tracey described the Capitol riot and the media’s predilection for hyperbole while reporting on it:

“Is it unusual for a mob to breach the Capitol Building — ransacking offices, taking goofy selfies, and disrupting the proceedings of Congress for a few hours? Yes, that’s unusual. But the idea that this was a real attempt at a “coup” — meaning an attempt to seize by force the reins of the most powerful state in world history — is so preposterous that you really have to be a special kind of deluded in order to believe it. Or if not deluded, you have to believe that using such terminology serves some other political purpose. Such as, perhaps, imposing even more stringent censorship on social media, where the “coup” is reported to have been organized. Or inflicting punishment on the man who is accused of “inciting” the coup, which you’ve spent four years desperately craving to do anyway.

Journalists and pundits, glorying in their natural state — which is to peddle as much free-flowing hysteria as possible — eagerly invoke all the same rhetoric that they’d abhor in other circumstances on civil libertarian grounds. “Domestic terrorism,” “insurrection,” and other such terms now being promoted by the corporate media will nicely advance the upcoming project of “making sure something like this never happens again.” Use your imagination as to what kind of remedial measures that will entail.

Trump’s promotion of election fraud fantasies has been a disaster not just for him, but for his “movement” — such as it exists — and it’s obvious that a large segment of the population actively wants to be deceived about such matters. But the notion that Trump has “incited” a violent insurrection is laughable. His speech Monday afternoon that preceded the march to the Capitol was another standard-fare Trump grievance fest, except without the humor that used to make them kind of entertaining.”

This is not a semantic debate. What happened on January 6th was not a credible coup attempt, despite verbal goading from a large number of the mob suggesting as much, and notwithstanding Senator Ted Cruz’s poorly timed fundraising tweet that some construed (falsely) as his attempt to lead the nascent rebellion.

Still, do not confuse my words with an exoneration of Trump’s role in the Capitol riot. To the contrary, time and contemplation have led me to conclude that Trump is wholly responsible for the deadly acts conducted (literally) under banners displaying his name, regardless of the fact that his speech that morning did not directly call for a violent insurrection. In truth, he explicitly said the opposite: “I know that everyone here will soon be marching over to the Capitol building to peacefully and patriotically make your voices heard.”

Nonetheless, he had to know the potential was there and it was his job to lead at that moment. He didn’t.

Now IS the time to encourage more dialogue, not less — and that means fewer “Hitler” and “Communist” references (my subsequent references notwithstanding).

Along with Page and Gilens’ book on our democracy’s policy dysfunction, another influential book for me has been Yale historian Timothy Snyder’s On Tyranny: Twenty Lessons from the Twentieth Century (Tim Duggan Books, 2017). In it he uses historical examples to explain how governments use tragedies and crises to increase their control over society (and not usually for the common good).

For example, weeks after Adolf Hitler was made Chancellor of Germany, he used the Reichstag fire on February 27, 1933, to issue The Reichstag Fire Decree which suspended most civil liberties in Germany, including freedom of the press and the right of public assembly.

“A week later, the Nazi party, having claimed that the fire was the beginning of a major terror campaign by the Left, won a decisive victory in parliamentary elections,” says Snyder. “The Reichstag fire shows how quickly a modern republic can be transformed into an authoritarian regime. There is nothing new, to be sure, in the politics of exception.”

It would be absurd to treat Hitler’s shutting down of Communist newspapers as the forewarning of a future U.S. dictatorship ushered in by Twitter banning Trump. Our democracy can survive Trump’s Twitter ban. At the same time, our democracy isn’t stronger for it.

Conservative voices are now systematically targeted for censorship, as journalist Glenn Greenwald (not a conservative) described in a recent Twitter salvo.

Final Thoughts

Today, because of what happened on January 6th, the U.S. is not as free as it was even a month ago, and it is fruitless to blame one person, a group of people, the news media or a political party for this outcome. We have all contributed in a tiny way by isolating ourselves in self-selected information bubbles that keep us as far away as humanly possible from challenging and unpleasant thoughts. [For example, I spend 99 percent of my social media time watching Nerdrotic and Doomcock torch Disney, CBS and the BBC for destroying my favorite science fiction franchises: Star Wars, Star Trek and Doctor Who.]

A few days ago I chatted with a neighbor who continues to keep his badly dog-eared, F-150-sized Trump sign in his front yard. He talked weather, sports, and movies. Not a word on politics. I wanted to, but knew not to push it. If he had mentioned the current political situation, I would have offered this observation:

Political parties on the rise always overplay their hand. How else can you explain how the Democrats, facing an historically unpopular incumbent president — during a deep, pandemic-caused recession — could still lose seats in U.S. House elections? Republicans are one midterm election away from regaining the House of Representatives, and the two years until the next congressional election are a political eternity.

The Republicans will learn from the 2021 Capitol riot.

As for the Democrats, I would just suggest this fortune cookie wisdom:

[Image: fortune cookie]

Actually, that is wisdom for all of us.

  • K.R.K.

Send comments to: nuqum@protonmail.com

The status quo is back — expect them to cry about the budget deficit

By Kent R. Kroeger (January 21, 2021)

Political scientist Harold Lasswell (1902–1978) said politics is about ‘who gets what, when and how.’

He wrote it in 1936, but his words are more relevant than ever.

In the U.S., his definition is actualized in Article I, Section 8 of the U.S. Constitution:

The Congress shall have Power To lay and collect Taxes, Duties, Imposts and Excises, to pay the Debts and provide for the common Defense and general Welfare of the United States; but all Duties, Imposts and Excises shall be uniform throughout the United States;

To borrow Money on the credit of the United States;

To regulate Commerce with foreign Nations, and among the several states, and with the Indian Tribes;

To establish an uniform Rule of Naturalization, and uniform Laws on the subject of Bankruptcies throughout the United States;

To coin Money, regulate the Value thereof, and of foreign Coin, and fix the Standard of Weights and Measures.

In short, the U.S. Congress has the authority to create money — which they’ve done in ex cathedra abundance in the post-World War II era.

According to the U.S. Federal Reserve, the U.S. total public debt is 127 percent of gross domestic product (or roughly $27 trillion) — a level unseen in U.S. history (see Figure 1).

Figure 1: Total U.S. public debt as a percent of gross domestic product (GDP)

[Chart. Source: St. Louis Federal Reserve]

And who owns most of the U.S. debt? Not China. Not Germany. Not Japan. Not the U.K. It is Americans who own roughly 70 percent of the U.S. federal debt.

It’s like owing money to your family — and if you’ve ever had that weight hanging over your head, you might prefer owing the money to the Chinese.

When it comes to dishing out goodies, the U.S. Congress makes Santa Claus look like a hack.

But, unlike Saint Nick, Congress doesn’t print and give money to just anyone who’s been good — Congress plays favorites. About 70 percent goes to mandatory spending, composed of interest payments on the debt (10%), Social Security (23%), Medicare/Medicaid (23%), and other social programs (14%). As for the other 30 percent of government spending, called discretionary spending, 51 percent goes to the Department of Defense.

That leaves about three trillion dollars annually to allocate for the remaining discretionary expenditures. To that end, the Congress could just hand each of us (including children) $9,000, but that is crazy talk. Instead, we have federal spending targeted towards education, training, transportation, veteran benefits, health, income security, and the basic maintenance of government.
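
As a sanity check on that $9,000 figure, here is the back-of-the-envelope arithmetic in a few lines of Python. The $3 trillion and the roughly 330 million population are round-number assumptions of mine, not figures from any budget document:

```python
# Back-of-the-envelope check of the $9,000-per-person figure.
# Assumptions (mine, not official figures): ~$3 trillion to allocate and a
# U.S. population of ~330 million, children included.
remaining_discretionary = 3.0e12   # dollars
us_population = 330e6              # people

per_person = remaining_discretionary / us_population
print(f"${per_person:,.0f} per person")   # roughly $9,091
```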

There was a time when three trillion dollars was a lot of money — and maybe it still is — but it is amazing how quickly that amount of money can be spent with the drop of a House gavel and a presidential signature.

The Coronavirus Aid, Relief, and Economic Security Act, passed by the U.S. Congress and signed into law by President Donald Trump on March 27, 2020, was costed at $2.2 trillion, with about $560 billion going to individual Americans and the remainder to businesses and state or local governments.

That is a lot of money…all of it debt-financed. And the largest share of it went directly to the bank accounts of corporate America.

And what do traditional economists tell us about the potential impact of this new (and old) federal debt? Their collective warning goes something like this:

U.S. deficits are partially financed through the sale of government securities (such as T-bonds) to individuals, businesses and other governments. The practical impact is that this money is drawn from financial reserves that could have been used for business investment, thereby reducing the potential capital stock in the economy.

Furthermore, due to their reputation as safe investments, the sale of government securities can impact interest rates when they force other types of financial assets to pay interest rates high enough to attract investors away from government securities.

Finally, the Federal Reserve can inject money into the economy either by directly printing money or through central bank purchases of government bonds, such as the quantitative easing (QE) policies implemented in response to the 2008 worldwide financial crisis. The economic danger in these cases, according to economists, is inflation (i.e., too much money chasing too few goods).

How does reality match with economic theory?

I am not an economist and don’t pretend to have mastered all of the quantitative literature surrounding the relationship between federal debt, inflation and interest rates, but here is what the raw data tells me: If there is a relationship, it is far from obvious (see Figure 2).

Figure 2: The Relationship between Federal Debt, Inflation and Interest Rates

[Chart. Source: St. Louis Federal Reserve]

Despite a growing federal debt, which has gone from just 35 percent of GDP in the mid-1970s to over 100 percent of GDP following the 2008 worldwide financial crisis (blue line), interest rates and annual inflation rates have fallen over that same period. Unless there is a 30-year lag, there is no clear long-term relationship between federal deficits and interest rates or inflation. If anything, the post-World War II relationship has been negative.
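
For readers who want to eyeball that (non-)relationship themselves, here is a minimal sketch that pulls the relevant FRED series with pandas_datareader and computes simple correlations. The series IDs are standard FRED codes, but the choice of series, the quarterly averaging, and the year-over-year inflation calculation are my own, not the method behind Figure 2:

```python
# Sketch: debt-to-GDP vs. the 10-year Treasury yield and CPI inflation,
# using FRED data. Series choices and transformations are illustrative.
import pandas_datareader.data as web

series = {
    "GFDEGDQ188S": "debt_pct_gdp",   # federal debt as a percent of GDP (quarterly)
    "DGS10": "treasury_10y",         # 10-year Treasury yield (daily)
    "CPIAUCSL": "cpi",               # consumer price index (monthly)
}

raw = web.DataReader(list(series), "fred", "1970-01-01", "2020-12-31")
q = raw.rename(columns=series).resample("Q").mean()

# Year-over-year inflation from the quarterly-averaged CPI.
q["inflation_yoy"] = q["cpi"].pct_change(4) * 100

print(q[["debt_pct_gdp", "treasury_10y", "inflation_yoy"]].corr())
```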

Given mainstream economic theory, how is that possible?

The possible explanations are varied and complex, but among the reasons for continued low inflation and interest rates, despite large and ongoing federal deficits, are an abundant labor supply, premature monetary tightening by the Federal Reserve (keeping the U.S. below full employment), globalization, and technological (productivity) advances.

Nonetheless, the longer interest rates and inflation stay subdued amid a fast-growing federal debt, the more likely it becomes that heterodox macroeconomic theories — such as Modern Monetary Theory (MMT) — will grow in popularity among economists. At some point, consensus economic theory must catch up to the facts on the ground.

What is MMT?

Investopedia’s Deborah D’Souza offers a concise explanation:

Modern Monetary Theory says monetarily sovereign countries like the U.S., U.K., Japan, and Canada, which spend, tax, and borrow in a fiat currency they fully control, are not operationally constrained by revenues when it comes to federal government spending.

Put simply, such governments do not rely on taxes or borrowing for spending since they can print as much as they need and are the monopoly issuers of the currency. Since their budgets aren’t like a regular household’s, their policies should not be shaped by fears of rising national debt.

MMT challenges conventional beliefs about the way the government interacts with the economy, the nature of money, the use of taxes, and the significance of budget deficits. These beliefs, MMT advocates say, are a hangover from the gold standard era and are no longer accurate, useful, or necessary.

More importantly, these old Keynesian arguments — empirically tenuous, in my opinion — needlessly restrict the range of policy ideas considered to address national problems such as universal access to health care, growing student debt and climate change. [Thank God we didn’t get overly worried about the federal debt when we were fighting the Axis in World War II!]

Progressive New York congresswoman Alexandria Ocasio-Cortez has consistently shown an understanding of MMT’s key tenets. When asked by CNN’s Chris Cuomo how she would pay for the social programs she wants to pass, her answer was simple (and I paraphrase): the federal government can pay for Medicare-for-All, student debt forgiveness, and the Green New Deal the same way it pays for a nearly trillion-dollar annual defense budget: just print the money.

In fact, that is essentially what this country has done since President Lyndon Johnson decided to prosecute a war in Southeast Asia at the same time he launched the largest set of new social programs since the New Deal.

Such assertions, however, generate scorn from status quo-anchored political and media elites, who are now telling the incoming Biden administration that the money isn’t there to offer Americans the $2,000 coronavirus relief checks promised by Joe Biden as recently as January 14th. [I’ll bet the farm I don’t own that these $2,000 relief checks will never happen.]

Cue the journalistic beacon of the economic status quo — The Wall Street Journal — which plastered this headline above the front page fold in its January 19th edition: Janet Yellen’s Debt Burden: $21.6 Trillion and Growing

WSJ writers Kate Davidson and Jon Hilsenrath correctly point out that the incoming U.S. Treasury secretary, Yellen, was the Chairwoman of the Clinton administration’s White House Council of Economic Advisers and among its most prominent budget deficit hawks, and offer this warning: “The Biden administration will now contend with progressives who want even more spending, and conservatives who say the government is tempting fate by adding to its swollen balance sheet.”

This misrepresentation of the federal debt’s true nature is precisely what MMT advocates are trying to fight. They note that when Congress spends money, the U.S. Treasury debits its operating account (through the Federal Reserve) and deposits this Congress-sanctioned new money into private bank accounts in the commercial banking sector. In other words, the federal debt boosts private savings — which, according to MMT advocates, is a good thing when the “debt” addresses any slack (i.e., unused economic resources) in the economy.
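
To make that accounting point concrete, here is a toy sketch of the sectoral-balances bookkeeping MMT advocates lean on. The dollar amounts are illustrative placeholders of mine, and this is a simplification of the argument, not a model taken from the article or from any MMT text:

```python
# Toy sectoral-balances ledger: a federal deficit is, by accounting identity,
# an equal addition to the non-government sector's net financial assets.
gov_spending = 4.4e12   # illustrative annual federal outlays (not actual figures)
taxes = 3.4e12          # illustrative annual federal receipts

gov_balance = taxes - gov_spending   # negative number = deficit
nongov_balance = -gov_balance        # the two sectors' balances sum to zero

print(f"Government balance:     {gov_balance / 1e12:+.1f} trillion")
print(f"Non-government balance: {nongov_balance / 1e12:+.1f} trillion")
```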

Regardless of MMT’s validity, this heterodox theory reminds us of how poorly mainstream economic thinking describes the relationship between federal spending and the economy. From what I’ve seen after 40 years of watching politicians warn about the impending ‘economic meltdown’ caused by our growing national debt, consensus economic theory seems more a tool for politicians to scold each other (and their constituents) about the importance of the government paying its bills than it is a genuine way to understand how the U.S. economy works.

Yet, I think everyone can agree on this: Money doesn’t grow on trees, it grows on Capitol Hill. And as the U.S. total public debt has grown, so have the U.S. economy and wealth inequality — which are intricately interconnected through, as Lasswell described 85 years ago, a Congress (and president) who decide ‘who gets what, when and how.’

  • K.R.K.

Send comments and your economic theories to: nuqum@protonmail.com

Beadle (the Data Crunching Robot) Predicts the NFL Playoffs

By Kent R. Kroeger (Source: NuQum.com; January 15, 2021)

Beadle (the Data Crunching Robot); Photo by Hello Robotics (Used under the Creative Commons Attribution-Share Alike 4.0 International license)

Since we are a mere 24 hours away from the start of the NFL Divisional Round playoffs, I will dispense with any long-winded explanation of how my data loving robot (Beadle) came up with her predictions for those games.

Suffice it to say, despite her Bayesian roots, Beadle is a rather lazy statistician who typically eschews the rigors and challenges associated with building statistical models from scratch for the convenience of cribbing off the work of others.

Why do all that work when you can have others do it for you?

There is no better arena in which to indulge Beadle’s sluggishness than predicting NFL football games, as there are literally hundreds of statisticians, data modelers and highly motivated gamblers who publicly share their methodologies and resultant game predictions for all to see.

Why reinvent the wheel?

With this frame-of-mind, Beadle has all season long been scanning the Web for these game predictions and quietly noting those data analysts with the best prediction track records. Oh, heck, who am I kidding? Beadle stopped doing that about four weeks into the season.

What was the point? It was obvious from the beginning that all, not most, but ALL of these prediction models use mostly the same variables and statistical modeling techniques and, voilà, come up with mostly the same predictions.

FiveThirtyEight’s prediction model predicted back in September that the Kansas City Chiefs would win this year’s Super Bowl over the New Orleans Saints. And so did about 538 other prediction models.

Why? Because they are all using the same data inputs and whatever variation in methods they employ to crunch that data (e.g., Bayesians versus Frequentists) is not different enough to substantively change model predictions.

But what if the Chiefs are that good? Shouldn’t the models reflect that reality?

And it can never be forgotten that these NFL prediction models face a highly dynamic environment in which quarterbacks and other key players can get injured over the course of a season, fundamentally changing a team’s prospects — a fact FiveThirtyEight’s model accounts for with respect to QBs — and the reason preseason model predictions (and Vegas betting lines) need to be updated from week to week.

Beadle and I are not negative towards statistical prediction models. To the contrary, given the infinitely complex contexts in which they are asked to make judgments, we couldn’t be more in awe of the fact that many of them are very predictive.

Before I share Beadle’s predictions for the NFL Divisional Round, I should extend thanks to these eight analytic websites that shared their data and methodologies: teamrankings.com, ESPN’s Football Power Index, sagarin.com, masseyratings.com, thepowerrank.com, ff-winners.com, powerrankingsguru.com, and simmonsratings.com.

It is by aggregating these models’ NFL team ratings that Beadle generates her own game predictions.

Beadle’s Predictions for the NFL Divisional Playoffs

Without any further ado, here is how Beadle ranks the remaining NFL playoff teams on her Average Power Index (API), which is simply each team’s standardized score (z-score) after averaging the index scores across the eight prediction models:

[Table: Average Power Index rankings. Analysis by Kent R. Kroeger (NuQum.com)]
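
For the curious, the aggregation behind the API amounts to a few lines of code. The team ratings below are made-up placeholders (the real inputs would come from the eight sites credited above); only the average-then-standardize step mirrors what Beadle actually does:

```python
# Sketch of Beadle's Average Power Index: average each team's rating across
# the source models, then convert the averages to z-scores.
import pandas as pd

# Hypothetical ratings (rows = teams, columns = source models).
ratings = pd.DataFrame(
    {"model_1": [28.1, 26.5, 24.9, 23.0],
     "model_2": [27.4, 25.9, 25.6, 22.8]},   # ...continue through model_8
    index=["Chiefs", "Packers", "Saints", "Bills"],
)

avg = ratings.mean(axis=1)                    # average each team across models
api = (avg - avg.mean()) / avg.std(ddof=0)    # standardize into z-scores
print(api.sort_values(ascending=False))
```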

And from those API values, Beadle makes the following game predictions (including point spreads and scores) through the Super Bowl:

No surprise: Beadle predicts the Kansas City Chiefs will win the Super Bowl in a close game with the New Orleans Saints.

But you didn’t need Beadle to tell you that. FiveThirtyEight.com made a similar prediction five months ago.

  • K.R.K.

Send comments to: nuqum@protonmail.com

The data do not support the Miami Dolphins bailing on Tua Tagovailoa

[Headline photo: Two cheerleaders for the Miami Dolphins football team (Photo by Jonathan Skaines; Used under the CCA-Share Alike 2.0 Generic license.)]

First, an apology to my wife. The above photo was one of the few Miami Dolphins-related public-copyright photos I could find on short notice. It should not be regarded, however, as an endorsement of fake smiles.

Now, to the issue at hand…

Alabama’s Tua Tagovailoa was the fifth overall pick and second quarterback taken in the 2020 National Football League (NFL) draft.

Drafted by the Miami Dolphins, Tagovailoa went behind Heisman winner Joe Burrow (QB — Cincinnati Bengals) and Ohio State’s Chase Young (DE — Washington Sea Dogs) and was one of four quarterbacks selected in the first round. The Los Angeles Chargers took the third quarterback, Oregon’s Justin Herbert, with the sixth overall pick, and Green Bay — mysteriously — thought Utah State’s Jordan Love, the 26th overall pick and fourth quarterback taken, was the final piece needed for the Aaron Rodgers-led Packers to win another Super Bowl (…and, even more mysteriously, Love’s clipboard-holding skills seem to be what the Cheeseheads needed this season).

Normally, when an NFL team drafts a quarterback as high as fifth, they give him at least a few years to earn his first-round contract. The Tampa Bay Buccaneers gave first overall pick Jameis Winston five years, as did the Tennessee Titans with second overall pick Marcus Mariota. Sam Bradford and Mark Sanchez were given four years to prove their value to their respective teams, the St. Louis Rams and New York Jets. The oft-injured Robert Griffin III — the Washington Federals’ second pick in the 2012 draft — had three years. Even purple drank-chugging rumors didn’t stop JaMarcus Russell from getting two solid years of opportunity from the Oakland Raiders.

And, keep in mind, the Dolphins’ 2012 first round pick — and current Titans quarterback — Ryan Tannehill gave the team six mediocre seasons before they jettisoned him in 2019. The Dolphins were patient with Tannehill — who has turned into a high-quality quarterback — so why not with Tagovailoa?

Dolphins owner Stephen M. Ross, who famously said “there’s a lot of good and I believe there’s a lot of bad” about his friend President Donald Trump, keeps a low profile and is not known for creating drama, even if he has been impatient with his head coaches, having had six since buying the team in 2008.

Yet, if he allows his football team’s brain trust to draft another quarterback in the first round, he will get more than drama: he will completely undercut the already fragile confidence of his current starter, Tagovailoa.

So why are a significant number of NFL draft experts seriously recommending the Dolphins use their third pick in the 2021 draft on another quarterback? Writing for ESPN, three out of seven experts said the Dolphins should use their pick on another quarterback:

Jeremy Fowler, national NFL writer: Quarterback. Key word is “address.” Miami needs to thoroughly evaluate the top quarterbacks in the draft, then weigh the pros and cons of not taking one and sticking with Tagovailoa as the unquestioned starter. Miami owes it to its fans and organization to at least do that. This is the one position where a surplus isn’t a bad thing. Keep drafting passers high if necessary. Tua might be the guy regardless. And if the Dolphins decide he’s better than Zach Wilson or Justin Fields or Trey Lance, then grab the offensive tackle or playmaking receiver Miami needs around him.

Mike Clay, fantasy football writer: Quarterback. You don’t have to agree with me on this, but I’ve always been in the camp of “If you’re not sure you have a franchise quarterback, you don’t have a franchise quarterback.” From my perspective, we don’t know whether Tua Tagovailoa is the answer, as he didn’t look the part and was benched multiple times as a rookie. Miami’s future looks bright after a 10-win season in Brian Flores’ second campaign, so it’s unlikely this franchise will be picking in the top five again anytime soon. If they aren’t convinced Tua is the franchise quarterback, they need to avoid sunk-cost fallacy and a trip to long-term quarterback purgatory.

Seth Walder: Quarterback. Tagovailoa still might pan out, but quarterback is too important for Miami to put all of its eggs in that basket, especially after he finished 26th in QBR and clearly did not earn complete trust from the coaching staff. Take a shot at whichever of the top three quarterbacks is left on the board while keeping Tagovailoa, at least for now. That way, Miami can maximize its chances of finding its franchise QB.

And the question must be asked: why? Has Tagovailoa grossly under-performed? If Miami drafts another quarterback just a year after taking Tagovailoa, the only conclusion one can draw is that the Dolphins consider him a bust. But with only one year under his belt, is it even possible to know that?

Before assessing Tagovailoa’s performance in his rookie season, we should consider the possible comparisons. The first comparison is the most obvious: compare Tagovailoa to other quarterbacks’ first significant playing years (which I define as a quarterback’s first year with at least three starts and 50 or more pass attempts — admittedly, a low threshold).

Also, for comparability’s sake, I’ve decided to compare only quarterbacks drafted in the first round since 2005, the year in which www.pro-football-reference.com starts reporting ESPN’s Total QBR Index (QBR) for quarterbacks. While other quarterback metrics have been posited as better measures of quarterback quality — passer rating, adjusted net yards per pass attempt — none are perfect, as they don’t directly account for the style of a team’s offense, the quality of a team’s personnel, or the quality of the defense, all of which play a significant role in how a quarterback performs. In the end, I went with the statistic that best predicts wins: ESPN’s QBR.
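
For anyone who wants to replicate the screen, here is a rough sketch of that “first significant playing year” filter. The file and column names are hypothetical stand-ins for whatever one exports from pro-football-reference.com; only the thresholds (first round, drafted 2005 or later, at least three starts and 50 pass attempts) come from the definitions above:

```python
# Sketch: find each first-round QB's first "significant" season, defined above
# as at least 3 starts and at least 50 pass attempts, for QBs drafted since 2005.
import pandas as pd

# Hypothetical season-level export; column names are stand-ins, not
# pro-football-reference.com's actual field names.
df = pd.read_csv("qb_seasons.csv")   # player, season, draft_year, draft_round,
                                     # starts, attempts, qbr

eligible = df[
    (df["draft_round"] == 1)
    & (df["draft_year"] >= 2005)
    & (df["starts"] >= 3)
    & (df["attempts"] >= 50)
]

# Earliest qualifying season per player = the "first significant playing year."
first_significant = (
    eligible.sort_values("season")
            .groupby("player", as_index=False)
            .first()[["player", "season", "qbr"]]
)
print(first_significant.head())
```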

[I should add that while the QBR does not consider the strength-of-schedule (SoS) faced by a quarterback, it is easily computed and nicely demonstrated in a past analysis by Chase Stuart on footballperspective.com. In a follow-up to this essay, I will incorporate SoS information into player performance metrics for the 2020 season.]

The second comparison is Tagovailoa’s game-to-game performance: did he improve? And the final comparison is against the QBR scale itself. By design, ESPN’s QBR offers an approximate objective standard by which to judge quarterbacks: QBRs exceeding 50 represent above-average quarterbacks when compared to all quarterbacks since 2006.

I will dispense with the last comparison first: Tagovailoa’s rookie-year QBR, based on nine starts, 290 pass attempts, a 64.1 percent completion rate and 11 touchdown passes against five interceptions, is an above-average 52.9 (which nonetheless puts him 26th out of the 35 quarterbacks for whom a QBR was computed).

Well, on this comparison at least, Tagovailoa does not stand out in a positive way. But perhaps his performance improved over the season? Hard to say. His first start, in Week 8 against the Los Angeles Rams — the NFL’s best passing defense — produced a 29.3 QBR, and over his next eight starts he achieved QBRs over 60 against the Arizona Cardinals (Week 9, QBR 87.3), the Los Angeles Chargers (Week 10, QBR 66.5), the Cincinnati Bengals (Week 13, QBR 74.5) and the Las Vegas Raiders (Week 16, QBR 64.4). Conversely, he struggled against the Denver Broncos (Week 11, QBR 22.9), the Kansas City Chiefs (Week 14, QBR 30.2), and the Buffalo Bills (Week 17, QBR 23.3) — all good passing defenses.

After these first two comparisons, it is hard to decide if Tagovailoa is going to be Miami’s franchise quarterback for the future. As with almost any rookie quarterback, there are positives and negatives, and neither overwhelms the other in Tagovailoa’s case.

However, in our final comparison, I believe Tagovailoa has more than proven it is far too soon for the Dolphins to spend a Top 3 draft choice on another quarterback.

First, we should look at the season-to-season QBRs of quarterbacks who are arguably “franchise” quarterbacks and who were picked in the first round (see Figure 1 below). And if you don’t consider Kyler Murray, Ryan Tannehill, Baker Mayfield or Jared Goff franchise quarterbacks, check in with me in a couple of years. All four are currently in a good, mid-career trajectory by historical standards.

Figure 1: Season-to-Season QBRs for NFL “franchise” Quarterbacks Selected in the 1st Round since 2005

[Chart. Data Source: www.pro-football-reference.com]

Three things jump out to me from Figure 1: (1) Franchise quarterbacks rarely have seasons with dismal overall QBRs (<40), (2) Aaron Rodgers really is that great, and (3) Patrick Mahomes, still early in his career, is already in the QBR stratosphere (…and he almost has nowhere to go but down).

How does Tagovailoa compare to my selection of franchise quarterbacks and non-franchise quarterbacks, as well as the other quarterbacks in the 2020 first round draft class (Joe Burrow and Justin Herbert)? As it turns out, pretty good (see Figure 2).

As for the non-franchise quarterbacks, my most controversial assignments are Cam Newton and Joe Flacco. I’m open to counter-arguments, but their inclusion in either group does not change the basic conclusion from Figure 2 with respect to Tagovailoa.

Figure 2: Season-to-Season QBRs for NFL “franchise” & “non-franchise” Quarterbacks Selected in the 1st Round since 2005

[Chart. Data Source: www.pro-football-reference.com]

In comparison to other quarterbacks’ first substantive years in the NFL, Tagovailoa’s 2020 QBR is slightly below the average for franchise quarterbacks (52.9 versus 54.6) and significantly higher than the average for non-franchise quarterbacks (52.9 versus 46.1).

Among his 2020 draft peers, Tagovailoa’s QBR is comparable to Burrow’s (who missed six games due to a season-ending injury), but a far cry from Herbert’s (QBR = 69.7), who is already showing clear signs of superstardom ahead.

Experts are happy to debate whether Tagovailoa has the ability to “throw guys open,” or whether the level of receiver talent he had at Alabama masked his deficiencies. He may well never be a franchise quarterback by any common understanding of the category.

But given his performance in his rookie campaign and how it compares to other quarterbacks, it is unfathomable to me that the Dolphins could entertain even the slightest thought of drafting a quarterback in the 2021 draft. I hope they are not, and that it is merely some ESPN talking heads with a wild hair up their asses.

  • K.R.K.

Send comments to: nuqum@protonmail.com

Why opinion journalists are sometimes bad at their job (including myself)

[Headline graphic by Dan Murrell; Data source: RottenTomatoes.com]

By Kent R. Kroeger (Source: NuQum.com; January 4, 2021)

Opinion journalists, such as movie critics, bring biases to every opinion they hold and complete objectivity is an ideal few, if any, attain.

The scientific literature on this trait common to all humans, not just opinion journalists, is vast and well-established. The lenses through which we interact with the world are multilayered and varied, each of us with our own unique configuration.

The science tells us we tend to overestimate our own knowledge while underestimating the knowledge of others (the “Lake Wobegon effect”); we tend to believe an idea that has been repeated to us multiple times or is easy to understand, regardless of its actual veracity (the “illusory truth effect”); we overestimate the importance of recent information over historical information (the “recency effect”); we offer others the opinions we expect them to view favorably and often suppress our unpopular opinions (“social desirability bias”); and perhaps the most dangerous bias of all: confirmation bias — our inclination to search for, process and remember information that confirms our preconceptions to the exclusion of information that might challenge them.

But nowhere are human biases more socially destructive than when opinion journalists project motives onto the opinions and actions of others. This is often called the illusion of transparency: we overestimate our own ability to understand what drives someone else’s opinions and behaviors. [The other side of that same bias occurs when we overestimate the ability of others to know our own motivations.]

The illusion of transparency often leads to fundamental attribution errors, in which the explanations for the opinions and behaviors of others are falsely reduced to psychological and personality-based factors (“racist,” “sexist,” “lazy,” “stupid,” etc.).

In combination with intergroup bias — which takes the illusion of transparency to the group level and causes members of a group to give preferential treatment to their own group, often leading to a group’s intellectual atrophy as they make it difficult for new ideas to be introduced into the group — this tendency to falsely infer the motives of others can create systematic, group-level misunderstandings, leading potentially to violent social conflicts.

Judge not, that ye be not judged (Matthew 7:1-3 KJV)

I know something of these biases, as I engage in them when I write, including in my last opinion essay about the unusual proportion of male movie critics who gave Wonder Woman 1984 (WW84) a positive review (“Are movie critics journalists?“). Though I have never met one of these male movie critics, I still felt comfortable attributing their positive WW84 reviews to their being handpicked by the movie’s studio (Warner Bros.) for early access to the film, along with their desire to “please their editors and audience” (a presumed manifestation of the social desirability bias) and other career motives.

Was I right? I offered little evidence beyond mere conjecture as to why the few early negative reviews for WW84 came almost entirely from female movie critics (I basically said liberal men are “useless cowards“). For that I am regretful. I can do better.

Yet, I still believe there was a clear bias among some movie critics in favor of WW84 for reasons unrelated to the actual quality of the movie. How is it possible that, out of the 19 male movie critics in Rotten Tomatoes’ “Top Critics” list who reviewed WW84 in the first two days of its Dec. 15th pre-release, not one gave WW84 a bad review? Not one.

If we assume the reviews were independent of one another and that the actual quality of WW84 warranted 80 percent positive reviews (an assumption purely for argument’s sake), then the probability that we’d get 19 consecutive positive reviews from the top male movie critics is a mere 1.4 percent (= 0.8^19). If we use WW84’s current Rotten Tomatoes score among all critics (60 percent) as our assumption, that probability drops to near zero.
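
For transparency, the arithmetic behind that 1.4 percent is simply the probability of 19 independent positive reviews in a row. A quick sketch (the 80 percent and 60 percent baselines are the assumptions stated above, nothing more):

```python
# Probability that all 19 early "Top Critics" reviews are positive, assuming
# each review is independent and positive with probability p.
def prob_all_positive(p: float, n: int = 19) -> float:
    return p ** n

print(f"p = 0.80: {prob_all_positive(0.80):.3%}")   # about 1.4%
print(f"p = 0.60: {prob_all_positive(0.60):.5%}")   # about 0.006%, effectively zero
```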

I can only draw one conclusion: Early reviews by the top male critics were excessively positive for WW84.

As to why this happened, be my guest with your own theories and hypotheses. Do I think Warner Bros. paid for good WW84 reviews? That is the typical straw-man argument Hollywood journalists like to use to discredit critics of entertainment journalism. I have no evidence of money changing hands between Warner Bros. and selected movie critics, and I have never suggested as much.

Do I think editors, peer pressure, and even the general public mood weigh heavily on movie critics’ reviews? Absolutely, yes, and scientific evidence in other social contexts suggests this is likely the case.

Which is why when I read other journalists and movie critics suggest that negative WW84 reviews are motivated by deep-rooted sexism, I cry, “Foul!”

No, critics of “Wonder Woman 1984” are not sexist

In a recent article for Forbes, movie critic and screenwriter Mark Hughes concludes that much of the criticism of WW84, especially from male critics, is motivated by nothing less than sexism. He writes:

Questions of the film’s tone and action sequences are frankly of little interest to me, since most of the same folks offering up those complaints were eager to praise the silliness of many other superhero films. One day it’s “these films take themselves too seriously,” and the next it’s “this film is silly and should take itself more seriously.” Wash, rinse, repeat as necessary (or as clicks and payday necessitate).

Likewise, when men helm films we see far more willingness to weigh “that which works” as more important than “that which doesn’t work,” and allow them room to come back later and impress us. A woman, though? Not so much, as Patty Jenkins has been personally insulted and condemned by voices declaring Wonder Woman 1984 an inexcusable offense to humanity. If you think I’m being hyperbolic about the accusations hurled against the film and its defenders, go look around social media and press coverage for 30 seconds, and then come back to finish this article…

In other words, according to Hughes, we don’t have to be conscious of our deeply ingrained, latent sexism to be subject to its power. Merely disliking a movie directed by a woman proves its existence.

Let me start by noting that many of the male (and female) movie critics who did not like WW84 gave glowing reviews to director Patty Jenkins’ first Wonder Woman movie in 2017. Chris Stuckmann is as good an example as any of the flaw in Hughes’ sexism charge: compare Stuckmann’s 2017 Wonder Woman review with his WW84 review.

Did Stuckmann’s latent sexism only kick in after 2017? Of course not. The more likely explanation is that Stuckmann realizes Wonder Woman (2017) is a very good movie and WW84 is not.

But since Hughes is carelessly willing to suggest critics like Stuckmann are driven by subconscious sexist tendencies when they review movies by female directors, let me conjecture that Hughes had a much more powerful motivation for giving WW84 a good review.

Hughes is a screenwriter (as well as being a movie critic) and one of the well-known attributes of Hollywood culture is that directors, writers, and actors do not publicly like to piss on someone else’s work. It can be career suicide, particularly when that person directed one of the best movies of 2017 (Wonder Woman) and is widely admired within the industry. Even if sexism is alive and well in Hollywood (and I have no doubt that it is), by virtue alone of having helmed two great movies in her young career — Monster (2003) and Wonder Woman (2017) — Jenkins possesses real power by any Hollywood standard.

That Hughes liked WW84 is not surprising. I would have been stunned if Hughes hadn’t.

My complaint about Hughes’ recent Forbes article chastising the “harsher” critics of WW84 is not that Hughes thought WW84 was a good film. That Hughes appreciated the positive themes in WW84 enough to overlook the movie’s obvious flaws is truly OK. [My family, myself notwithstanding, loved the movie.] I’ve loved many movies that, objectively, were rather bad (Nicolas Cage in The Wicker Man comes to mind).

My problem with Hughes (and, unfortunately, far too many writers and journalists at present) is that he throws around psychological theories and personal accusations without a shred of empirical evidence.

Hughes doesn’t know the motivations for why someone writes a critical review any more than I do.

But Hughes takes it one step farther. He implies there’s a dark, antisocial aspect to someone who doesn’t like WW84. He asks: “Do you look at the world around you and decide we need LESS storytelling that appeals to our idealism and posits a world in which grace and mercy are transformative, in which people can look at the truth and make a choice in that moment to try to be better?”

No, Mr. Hughes, I do not think we need LESS storytelling that appeals to our idealism and better angels. But I believe we need MORE GOOD storytelling that does that. Unfortunately, in my opinion, WW84 does not meet that standard. Furthermore, when Hollywood and our entertainment industry do it poorly, I fear it risks generating higher levels of cynicism towards the very ideals you (and I) endorse.

As one of my government bosses once said as he scolded me, “Kent, good intentions don’t matter. I want results.”

I think that dictum applies to Hollywood movies too.

  • K.R.K.

Send comments to: nuqum@protonmail.com

 

Are movie critics journalists? Reviews for “Wonder Woman 1984” suggest many are not.

[Headline photo: Gal Gadot speaking at the 2016 San Diego Comic Con International, in San Diego, California (Photo by Gage Skidmore; used under CCA-Share Alike 2.0 Generic license.)]

By Kent R. Kroeger (Source: NuQum.com; December 31, 2020)

A friend of mine from graduate school — whose opinions I trusted, particularly when it came to movies and popular culture (for example, he introduced me to South Park)— shocked me one day when he told me he hated The Godfather.

How can someone who loves movies hate The Godfather?! How could someone so well-informed — he is today a recognized expert in the role and social importance of myth-making —be so utterly wrong?

The answer is quite simple: Danny prided himself on being a critic and he had a genuine problem with The Godfather, particularly the acting and dramatic pacing. [A similarly harsh critique of The Godfather was written in 1972 by Stanley Kauffmann of The New Republic.]

The reality is, thoughtful people can have dramatic differences in opinion, especially when it comes to things as subjective as movies and entertainment. [I love Monty Python and my Stanford PhD wife thinks they are moronic. Both opinions can be correct.]

Still, I’m convinced if you put 100 well-educated movie critics in a room to discuss The Godfather, 95 of them would say the movie is an American classic, and most would probably put one or both of the first two Godfather movies in the Top 20 of all time. The ‘wisdom of the crowd’ represents something real and cannot be ignored.

At the same time, those five Godfather-dismissing critics are no less real and their opinions are no less meritorious — assuming they aren’t pursuing an agenda unrelated to judging the quality of The Godfather.

But that is the problem I fear contaminates too many movie reviews today. Movie critics, by training and platform, are ‘opinion journalists.’ As such, they filter their opinions through a desire to please (impress) an immediate social circle (and bosses), as well as an influence from the mood of the times. We all do that, as it is only human.

But good journalists, including movie critics, fight that tendency — or, at least, I believe they should make the attempt.

In the case of movie criticism, to not do so risks compromising the value of the critiques. At best, it renders the criticism worthless, and at worst, malevolent.

It is fair to ask at this point, what the hell am I talking about?

I am not going to review Wonder Woman 1984 (WW84) here. I enjoy reading movie criticism, but I don’t enjoy writing it. However, if I did review WW84, it might sound something like this review by Alteori.

As much as I thought Gal Gadot raised her game in WW84, I didn’t think anybody else did. But my overall reaction to WW84 was driven, in part, by what I did before I even saw the movie.

My first mistake (besides grudgingly subscribing to HBOMax — whose horrible, wretched parent company I once worked at for a short time) was to read one of the embargo-period reviews. Those are reviews from movie critics pre-selected by Warner Bros. to see the film prior to a wider release.

Normally, for movies I am excited to see, I avoid the corporate hype and eschew the early reviews. I want my opinion to be uncorrupted by other opinions. WW84 was one of those movies because, as this blog can attest, I am a huge fan of the first Wonder Woman movie (2017), particularly Gal Gadot’s portrayal of the superhero and the way director Patty Jenkins and screenwriter Allan Heinberg avoided turning the film into a platform for some watered-down, partisan political agenda. Wonder Woman (2017) was a film made for everyone.

[By the way, as a complete digression, I don’t care when people mispronounce names, especially names outside someone’s native language. But I don’t understand why 9 out of 10 movie reviewers still pronounce Gal Gadot’s name wrong. It couldn’t be simpler. It is Gal (as in guys and gals) and Guh-dote (as in, ‘my grandmother dotes on me’). Here is Gal to help you with the pronunciation.]

But, for reasons unknown, I decided to read one “Top Critic” review of WW84 before seeing the film myself. I will not reveal the reviewer’s name; yet, after seeing WW84, I have no idea what movie that person saw because it wasn’t the WW84 I saw.

This is the gist of that early review (for which I paraphrase in order to protect the identity and reputation of that clearly conflicted reviewer):

Wonder Woman 1984 is the movie we’ve all been waiting for!

If I had only read the review more closely, I would have seen the red flags. Words and phrases like “largely empty spectacle,” “narratively unwieldy,” “overwrought,” “overdrawn,” and “self-indulgent” were sprinkled throughout, if only I had been open to those hints.

In fact, after reading nearly one hundred WW84 reviews in the last two weeks, I see now that movie critics will often leave a series of breadcrumb clues indicating what they really thought of the movie. At the office they may be shills for the powerful movie industry, but similar to Galen Erso’s design of the Death Star, they will plant the seed of destruction for even the most hyped Hollywood movie. In other words, they may sell their souls to keep their jobs, but they still know a crappy movie when they see one.

Maybe ‘crappy’ is too strong, but WW84 was not a good movie — not by any objective measure that I can imagine. Don’t take my word for it. Read just a few of the reviews on RottenTomatoes.com by movie critics who still put their professional integrity ahead of their party schedule: Hannah Woodhead, Fionnuala Halligan, Angelica Jade Bastién, and Stephanie Zacharek.

It’s not a coincidence that these movie critics are all women. It is clear to me that they have been gifted a special superpower which allows them to see through Hollywood’s faux-wokeness sh*t factory. That male movie critics are too afraid to see it, much less call it out, is further proof that one of the byproducts of the #MeToo movement is that liberal men are increasingly useless in our society. They can’t even review a goddamn movie with any credibility. Why are we keeping them around? What role do they serve?

Alright. Now I’ve gone too far. The vodka martinis are kicking in. I’m going to stop before I type something that generates the FBI’s attention.

I’ll end with this: I still love Gal Gadot and if WW84 had more of her and less of everyone else in the movie, I would have enjoyed the movie more. Hell, if they filmed Gal Gadot eating a Cobb salad for two-and-a-half hours I would have given the movie two stars out of four.

To conclude, if you get one thing from this essay, it is this: Gal Guh-dote. Gal Guh-dote. Gal Guh-dote. Gal Guh-dote. Gal Guh-dote. Gal Guh-dote. Gal Guh-dote. Gal Guh-dote. Gal Guh-dote…

  • K.R.K.

Send comments to: nuqum@protonmail.com

Why the Season 2 finale of ‘The Mandalorian’ matters to so many of us

[Headline graphic: The Mandalorian (Graphic by Gambo7; used under the CCA-Share Alike 4.0 Int’l license)]

By Kent R. Kroeger (Source: NuQum.com; December 27, 2020)

________________________________

“As in the case of many great films, maybe all of them, we don’t keep going back for the plot.” – Martin Scorsese

“I don’t care about the subject matter; I don’t care about the acting; but I do care about the pieces of film and the photography and the soundtrack and all of the technical ingredients that made the audience scream. I feel it’s tremendously satisfying for us to be able to use the cinematic art to achieve something of a mass emotion.” – Alfred Hitchcock

________________________________

After 55-plus years, I can count on two hands and a couple of toes the number of times I’ve cried watching a movie or TV program.

I cried when Mary Tyler Moore turned off the lights at WJM-TV.

I cried when Radar O’Reilly announced Colonel Henry Blake’s death.

I cried when the U.S. Olympic hockey team beat the Soviets in 1980.

I cried when Howard Cosell told us that John Lennon had been killed.

I cried when ET said goodbye to Elliot.

I cried when the Berlin Wall came down in 1989.

I cried when baby Jessica was pulled from a 22-foot well.

I cried when Mandy Moore’s character died at the end of “A Walk to Remember.”

I cried when Harry Potter and his wife sent their son off to Hogwarts.

I cried when Barack Obama became our 44th president.

I cried when the 33 Chilean miners were rescued.

I cried when the Chicago Cubs won the 2016 World Series.

But I can’t remember crying harder than while watching this season’s final episode of Disney’s “The Mandalorian,” when Luke Skywalker rescues Grogu (more popularly known as ‘Baby Yoda’) from the Empire’s indefatigable, post-Return of the Jedi remnants.

Since its December 18th release on Disney+, YouTube has been flooded with “reaction” videos of Star Wars fans as they watched a CGI-version of a young Luke Skywalker (Mark Hamill) remove his hood before Grogu’s caretaker, Din Djarin (a.k.a. The Mandalorian), and offer to train Grogu in the ways of The Force.

The “reaction” videos range from the highly-staged to the very charming and personal — all are illustrative of the deep affection so many people have for the original Star Wars characters, particularly Luke Skywalker.

For me, however, it is hard to detach from this emotional, collective experience the knowledge that it never would have happened if Lucasfilm (i.e., Disney), under the leadership of Kathleen Kennedy, hadn’t completely botched the Disney sequel movies, starting with “The Force Awakens,” director J. J. Abrams’ visually stunning but soulless attempt at creating a new Star Wars myth, followed by “The Last Jedi,” director Rian Johnson’s inexplicable platform for pissing on the original Star Wars mythos, and ending with “The Rise of Skywalker,” J.J. Abrams’ failed attempt to undo Johnson’s irreparable damage (along with the desecration Abrams himself laid upon the Star Wars brand with “The Force Awakens”).

Though opinions vary among Star Wars fans as to the extent Disney has alienated its core Star Wars audience, almost all agree that Disney’s most unforgivable sin was disrespecting the character of Luke Skywalker, who had been defined during  George Lucas’ original Star Wars trilogy as an incurable optimist with an unbreakable loyalty to his family and friends (Princess Leia Organa and Han Solo).

We cried at Season 2’s end of the “The Mandalorian,” not just for the beauty of the moment, but also because of the depth of Disney and Lucasfilm’s betrayal.

Actor Mark Hamill, himself, as he promoted (!) “The Last Jedi,” perfectly described the cultural vandalism perpetrated by Kennedy, Abrams and Johnson on Luke Skywalker:

“I said to Rian (Johnson), Jedis don’t give up. I mean, even if he had a problem he would maybe take a year to try and regroup, but if he made a mistake he would try and right that wrong. So, right there we had a fundamental difference, but it’s not my story anymore, it’s somebody else’s story and Rian needed me to be a certain way to make the ending effective…This is the next generation of Star Wars, so I almost had to think of Luke as another character — maybe he’s ‘Jake Skywalker.’ He’s not my Luke Skywalker.”

That is not exactly what Johnson wanted to hear from one of his “Last Jedi” actors just as the movie was being released. But Hamill’s words spoke for many longtime Star Wars fans.

In fact, many of us believe Disney and Lucasfilm’s Kennedy, with ruthless premeditation, intended to use the Disney sequel movies to malign Lucas’ Star Wars characters (with the exception of Princess Leia) in favor of the Disney-ordained Star Wars cast: Rey Palpatine, Kylo Ren (Ben Solo), Poe Dameron, and Finn.

I’m fairly confident in this prediction: Nobody 10, 20 or 30 years from now is going to care about Rey, Kylo, Poe and Finn. But I’m 99 percent sure we’ll still be talking about Luke Skywalker, if only in recalling how Disney f**ked up one of the most iconic heroes in movie history. Rey inspires no one — including young girls, who apparently were Lucasfilm’s targeted demo with the Disney sequel movies.

Had Disney trusted their own market research, they would have known the only reliable target was the tens of millions of original Star Wars fans (and their children and grandchildren), whose loyalty to Star Wars was proven when they still showed up at theaters for Disney’s three sequel movies, even after their devotion was insulted with the unnecessary diminution of the once dashing and heroic Han Solo (Harrison Ford) and, of course, Luke.

Had Disney treated their core audience with respect, Star Wars fans now might be anticipating Rey’s next cinematic adventure, instead of drowning themselves in the bittersweet giddiness of Luke’s triumphant return on “The Mandalorian.”

To be sure, a lot of Star Wars fans want to put Luke’s return in its proper perspective. We still have to accept that — under the Disney story line — Luke is destined to slump off to a remote island, drinking titty-milk from the teat of a giant alien sea cow while whining that he couldn’t stop his nephew from killing off Luke’s young Jedi pupils (including, presumably, Grogu).

Despite the joyousness of Luke on “The Mandalorian,” the dark cloud of Abrams and Johnson’s bad storytelling still looms large.

But even the biggest Disney critics are allowing themselves to enjoy what Jon Favreau and David Filoni — the creative team behind “The Mandalorian” — are doing for the fans.

One such person is Nerdrotic (Gary Buechler), the bearded crown prince of the amorphous Fandom Menace — a term used to describe a social-media-powered subculture of disgruntled Star Wars fans who are particularly aggrieved at how Lucasfilm has dismantled Star Wars canon, allegedly using the Star Wars brand to pursue a “woke” political agenda at the expense of good storytelling.

“For the first time in a long time, the majority of the fans were happy, and the question you have to ask upfront is, ‘Disney, was it really that hard to show respect to the hero of generations, Luke Skywalker?’” says Buechler. “It must have been, because it took them 8 or 9 years to do it, but when they did do it, it sent a clear message that people still want this type of storytelling, and in this specific case, they want Luke Skywalker because he is Star Wars.”

For me, Luke’s return in “The Mandalorian” is a reminder that great moments are what make movies (and TV shows) memorable, not plot or story lines. People love and remember moments.

As someone who camped out in a dirty theater alleyway in Waterloo, Iowa in the Summer of 1977 to see a movie that was then just called “Star Wars,” I am going to enjoy what Favreau and Filoni gave us on “The Mandalorian” — the moment where the Luke Skywalker we love and remember from childhood returned to Star Wars.

– K.R.K.

Send comments to: nuqum@protonmail.com

Postscript: In recent days, Lucasfilm and Disney social media operatives have been posting messages reminding us that Luke Skywalker himself, Mark Hamill, is a “fan” of the Disney sequel movies, including Rian Johnson’s “The Last Jedi.”

Perhaps that is true. But I also believe Hamill has made it clear in the past couple of days where his heart resides — with George Lucas’ Luke Skywalker.

News media bias and why it’s poorly understood

By Kent R. Kroeger (Source: NuQum.com; December 21, 2020)

Few conversation starters can ruin an otherwise pleasant dinner party (or prevent you from being invited to future ones) faster than asking: Is the news media biased?

If you ask a Democrat, they will tell you the Fox News Channel is the problem (“They started it!” as if explaining to an elementary school teacher who threw the first punch during a playground fight). Ask a Republican and they will say Fox News is just the natural reaction to the long-standing, pervasive liberal bias of the mainstream media.

This past presidential election has poured gasoline on the two arguments.

In late October, the Media Research Center (MRC), a conservative media watchdog group, released research showing that, between July 29 and October 20, 92 percent of evaluative statements about President Trump by the Big Three evening newscasts (ABC, CBS, NBC) were negative, compared to only 34 percent for Democratic candidate Joe Biden. Apart from a few conservative-leaning news outlets, such as the Fox News Channel and The Wall Street Journal, the MRC release was ignored.

When I shared the MRC research with my wife, her reaction was probably representative of many Democrats and media members: “Why wasn’t their coverage 100 percent negative towards Trump?”

The MRC doesn’t need me to defend their research methods, except I will point out that how they measure television news tone has a long history within media research, dating back to groundbreaking research by Michael J. Robinson and Margaret A. Sheehan, summarized in their 1981 book, “Over the Wire and on TV: CBS and UPI in Campaign ’80.”

Here is MRC’s description of their news tone measurement method:

“MRC analysts reviewed every mention of President Trump and former Vice President Biden from July 29 through October 20, 2020, including weekends, on ABC’s World News Tonight, the CBS Evening News and NBC Nightly News. To determine the spin of news coverage, our analysts tallied all explicitly evaluative statements about Trump or Biden from either reporters, anchors or non-partisan sources such as experts or voters. Evaluations from partisan sources, as well as neutral statements, were not included.

“As we did in 2016, we also separated personal evaluations of each candidate from statements about their prospects in the campaign horse race (i.e., standings in the polls, chances to win, etc.). While such comments can have an effect on voters (creating a bandwagon effect for those seen as winning, or demoralizing the supports of those portrayed as losing), they are not “good press” or “bad press” as understood by media scholars.”

Besides the MRC, there is another data resource on news coverage tone. It is called the Global Database of Events, Language, and Tone (GDELT) Project and was inspired by the automated event-coding work of Georgetown University’s Kalev Leetaru and political scientist Philip Schrodt (formerly of Penn State University).

The GDELT Project is described as “an initiative to construct a catalog of human societal-scale behavior and beliefs across all countries of the world, connecting every person, organization, location, count, theme, news source, and event across the planet into a single massive network that captures what’s happening around the world, what its context is and who’s involved, and how the world is feeling about it, every single day.”

[For a description of the many datasets available through GDELT, you can go here.]

The GDELT Project’s goals are ambitious to say the least, but its data may shed some light on the tone of news coverage during this past presidential election.

It is worth a look-see.

For the following analysis, I queried GDELT’s Global Online News Coverage database, filtering it down to US-only daily news articles that mention either Joe Biden or Donald Trump (but not both) from January 15, 2017 to November 22, 2020.

[The APIs used to query the GDELT database are available in this article’s appendix.]

Resulting from these queries were two metrics for each candidate: The first was the daily volume of online news coverage (measured as the percent of monitored articles), and the second was the average daily tone of online news coverage.

The second metric deserves some additional explanation.

GDELT uses Google’s Natural Language API to inspect a given text and identify the prevailing emotional opinion within the text, especially to determine a writer’s attitude as positive, negative or neutral. A text with a summary score over zero indicates that it was positive in overall tone. The higher a score, the more positive the text’s tone. Similarly, negative values indicate an overall negative tone. Values near zero indicate a text that is either neutral (i.e., no clear tone) or contains mixed tones (i.e., both positive and negative emotions).

For each news article, tone is calculated at the level of the entire article, not the tone of the sentence(s) mentioning Biden or Trump, so a negative article with a positive mention of Biden or Trump will still be scored negative. Finally, online news articles that mentioned both Biden and Trump were excluded from the analysis (83% of Biden articles mentioned both candidates, while only 9% of Trump articles did). In total, 4,593 online news articles were analyzed.
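For readers curious what such a tone score looks like in practice, below is a minimal, hypothetical sketch of scoring a single article with Google’s Natural Language API (the service described above). It is meant only to illustrate the scoring convention; it is not GDELT’s actual pipeline, and the sample text is invented.

```python
# Minimal, hypothetical sketch of scoring one article's overall tone with
# Google's Natural Language API. Assumes the google-cloud-language package is
# installed and credentials are configured; the sample text is made up and
# this is not GDELT's actual processing pipeline.
from google.cloud import language_v1

def article_tone(text: str) -> float:
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    sentiment = client.analyze_sentiment(
        request={"document": document}
    ).document_sentiment
    # Scores above zero indicate a positive overall tone, below zero a negative
    # tone, and values near zero a neutral or mixed tone -- the same convention
    # described above.
    return sentiment.score

print(article_tone("The candidate delivered a confident, well-received speech."))
```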

The resulting time-series data set contained five variables: (1) Date, (2) Daily Volume of Biden-focused Online News Coverage, (3) Average Daily Tone of Biden-focused Online News Coverage, (4) Daily Volume of Trump-focused Online News Coverage, and (5) Average Daily Tone of Trump-focused Online News Coverage.

From this data, I computed Biden’s net advantage in online news coverage tone by multiplying, for each candidate, the day’s news volume by the average news coverage tone. Trump’s volume-weighted news coverage tone was then subtracted from Biden’s.
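For readers who want to reproduce that arithmetic, a minimal pandas sketch of the volume-weighting step might look like the following; the file and column names are hypothetical stand-ins for the five variables listed above.

```python
import pandas as pd

# Hypothetical local export of the daily GDELT query results; column names are
# stand-ins for the five variables described above.
df = pd.read_csv("gdelt_daily.csv", parse_dates=["date"])

# Volume-weighted tone for each candidate: daily news volume (percent of
# monitored articles) multiplied by the average daily tone of that coverage.
df["biden_weighted_tone"] = df["biden_volume"] * df["biden_tone"]
df["trump_weighted_tone"] = df["trump_volume"] * df["trump_tone"]

# Biden's net advantage in online news coverage tone -- the series in Figure 1.
df["biden_net_tone_advantage"] = (
    df["biden_weighted_tone"] - df["trump_weighted_tone"]
)
print(df[["date", "biden_net_tone_advantage"]].head())
```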

Figure 1 (below) shows Biden’s net advantage in online news coverage tone from January 15, 2017 (near the beginning of Trump’s presidential term) to November 22, 2020.

Figure 1: Biden’s Tone Advantage over Trump in US Online News Coverage


According to the GDELT data, the tone of Biden-focused US online news coverage was far more positive than Trump-focused news coverage. In fact, online news coverage never favored Trump — not for one single day!

While there has been significant variation in Biden’s tone advantage since 2017 — most notably since August 2020, when Biden’s tone index advantage fell from 8.9 to 1.3 by late November 2020 — it is remarkable that even when the U.S. economy was booming in late 2019, well before the coronavirus pandemic had impacted the US, Biden was enjoying a significant advantage in online news tone.

Supporting the validity of the GDELT tone data, variation in Biden’s tone advantage fluctuates predictably with known events that occurred during the 2020 campaign.

In a March 25, 2020, interview with Katie Halper, former Biden staff member Tara Reade alleged that Biden had pushed her against a wall, kissed her, put his hand under her skirt, penetrated her with his fingers, and asked, “Do you want to go somewhere else?”

Beyond this allegation, there is only circumstantial evidence supporting Reade’s charge against Biden. Still, the impact of this allegation manifests itself in how Biden’s tonality advantage varied over time.

On March 25, 2020, Biden enjoyed a 7.7 tonality advantage over Trump. That advantage, however, immediately fell in the weeks following Halper’s Reade interview, reaching a relative low of 5.4 on May 11th.

Soon after, Biden’s tonality advantage began to recover rapidly, likely due to two major news stories in May. The first, on May 8th, marked the release of U.S. unemployment data showing the highest unemployment rate (14.7%) since the Great Depression, mainly due to job losses from the COVID-19 pandemic. These new economic numbers put the Trump administration in a clear defensive position, despite the fact that similar pandemic-fueled economic declines were occurring in almost every major economy in the world.

On May 25th, the second event — the death of George Floyd while being physically immobilized by a Minneapolis police officer — sparked a national outcry against police violence against African-Americans. Whether this outrage should have been directed at Trump (as it was by many news outlets) will be a judgment left to historians. What can be said is that Biden’s tone advantage over Trump trended upwards into the summer, reaching a 2020 peak of 9.0 on July 25th.

In the post-convention media environment, which included intermittent media coverage of the Hunter Biden controversy, Biden’s tone advantage declined for the remainder of the time covered in this analysis.

Admittedly, the GDELT data is imperfect in that it does not allow analysis at a sentence- or paragraph-level. Still, the finding in Figure 1 that Biden-focused news articles have been far more positive than Trump-focused news articles is consistent with the overall finding in the MRC tonal analysis of the 2020 presidential election.

Is this conclusive evidence of the news media’s anti-Trump bias? No. But it should inspire a further inquiry into this question, and to do that will require some methodological finesse. That is, it will require far more than just measuring the tone of news coverage.

In a country where President Trump’s approval rating has hovered between 40 and 46 percent through most of his presidency, the fact that — at a critical time in the election — the network TV news programs were over 90 percent negative towards Trump offers some face validity to the anti-Trump media bias argument.

But my wife’s gut reaction to the MRC research contains a profound point: What if Trump deserved the overwhelming negative coverage? After all, is it the job of the news media to reflect public opinion? To the contrary, by definition, an objective news media should be exclusively anchored to reality, not ofttimes fickle variations in public sentiment.

Consequently, the central problem in measuring media bias is finding a measure of objective reality by which to assess a president’s performance. Most everything we hold a president accountable for — the economy, foreign policy, personal character, etc. — is subject to interpretations and opinions that are commonly filtered by the news media through layers of oversimplifications, distortions and other perceptual biases.

Perhaps we can use a set of proxy measures? The unemployment rate. Gross domestic product growth. Stock prices. A president’s likability score. But to what extent does a president have an impact on those metrics? Far less than we may want to believe.

And now add to the equation a global pandemic for which Trump’s culpability, though widely asserted in the national news media, is highly debatable but reckless to dismiss out-of-hand.

How can the U.S. news media possibly be equipped to judge a president’s performance by any objective, unbiased standard?

It isn’t. And, frankly, it is likely the average American doesn’t require news organizations to be so equipped. Despite survey evidence from Pew Research suggesting news consumers dislike partisanship in the news media — most likely an artifact of the social desirability bias commonly found in survey-based research — recent studies also show U.S. news consumers choose their preferred news outlets through partisan and ideological lenses.

According to political scientist Dr. Regina Lawrence, associate dean of the University of Oregon’s School of Journalism and Communication and research director for the Agora Journalism Center, selection bias is consciously and unconsciously driving news consumers towards news outlets that share similar partisan and ideological points of view — and, in the process, increases our country’s political divide:

“Selective exposure is the tendency many of us have to seek out news sources that don’t fundamentally challenge what we believe about the world. We know there’s a relationship between selective exposure and the growing divide in political attitudes in this country. And that gap is clearly related to the rise of more partisan media sources.”

The implication of this dynamic for how journalists do their jobs is significant. There is little motivation across all levels of the news-gathering process — from the corporate board room down to the beat reporter — to put an absolute premium on covering news stories from an objective point of view.

Instead, journalists and media celebrities are motivated by the same psychological and economic forces as the rest of us: career advancement, prestige and money. And to succeed in the news business today, a journalist’s output must fit within the dominant (frequently partisan) narratives of his or her news organization.

In a trailblazing data-mining-based study by the Rand Corporation on how U.S. journalism has changed since the rise of cable news networks and social media, researchers found “U.S.-based journalism has gradually shifted away from objective news and offers more opinion-based content that appeals to emotion and relies heavily on argumentation and advocacy.”

And the result of this shift? Viewership, newsroom investments and profits at the three major cable news networks have significantly increased in the past two decades, at the same time that news consumers have shifted their daily news sources away from traditional media (newspapers and TV network news) towards new media outlets (online publications, news aggregators [e.g., Drudge Report], blogs, and social media). In 2019, the major U.S. media companies — which include assets and revenue streams far beyond those generated from their news operations — had a total market capitalization exceeding $930 billion.

Why then should we be surprised that today’s broadcast and print journalists are not held to a high objectivity or accuracy standard? Their news organizations are prospering for other reasons.

During the peak of the Russiagate furor, as many journalists were hiding behind anonymous government sources, few journalists and producers at CNN, MSNBC, The New York Times or Washington Post openly challenged the basic assumptions of that conspiracy theory, which asserted that Trump had colluded with the Russians during the 2016 election — a charge that, in the end, proved baseless.

Apart from ABC News chief investigative correspondent Brian Ross being disciplined for his false reporting regarding Trump campaign contacts with Russia, I cannot recall a single national news reporter being similarly disciplined for bad Russiagate reporting (and there was a lot of bad Russiagate reporting).

On the other side of the coin, there are certainly conservative news outlets where effusive Trump coverage is encouraged, but those cases are in the minority compared to the rest of the mainstream media (a term I despise as I believe the average national news outlet actively restricts the range of mainstream ideas presented to the news consuming public — and, furthermore, there is nothing ‘mainstream’ about the people who populate our national news outlets).

Being from Iowa, I’ve been spoiled by the number of times I’ve met presidential candidates in person. That, however, is not how most Americans experience a presidential election.

Americans generally experience presidential elections via the media, either through direct exposure or indirect exposure through friends, family and acquaintances; consequently, this potentially gives the news media tremendous influence over election outcomes.

According to Dr. Lawrence, the most significant way the news media impacts elections is through who and what they cover (and who and what they don’t cover). “The biggest thing that drives elections is simple name recognition.”

If journalists refuse to cover a candidate, their candidacy is typically toast. But that is far from the only way the news media can influence elections. How news organizations frame an election — which drives the dominant media narratives for that election — can have a significant impact.

The most common frame is that of the horse race in which the news media — often through polling and judging the size and enthusiasm of crowds — can, in effect, tell the voting public who is leading and who has the best chance of winning.

“We know from decades of research that the mainstream media tend to see elections through the prism of competition,” according to Lawrence. “Campaigns get covered a lot like sports events, with an emphasis on who’s winning, who’s losing, who’s up, who’s down, how they are moving ahead or behind in the polls.”

There are other narratives, however, that can be equally impactful — such as narratives centered on a candidate’s character (e.g., honesty, empathy) or intellectual capacity.

Was Al Gore as stiff and humorless as often portrayed in the 2000 campaign? Was George W. Bush as intellectually lazy or privileged as implied in much of the coverage from that same campaign?

Even journalists with good intentions can distort reality when motivated to fit their stories into these premeditated story lines.

More ominous, however, is the possibility that news organizations with strong biases against a particular candidate or political party can manipulate their campaign coverage in such a way that even objective facts are framed to systematically favor the voter impressions formed for one candidate over another.

Did that happen in the 2020 presidential election? My inclination is to say yes, but I go back to the original question posed in this essay: Did Donald Trump deserve the overwhelming negative coverage he received across large segments of the national news media?

Without clearly defining and validly measuring the objective, unbiased metrics by which to answer that question, there is no possible way to give a substantive response.

  • K.R.K.

Send comments to: nuqum@protonmail.com

GDELT API for Biden:

https://api.gdeltproject.org/api/v2/summary/summary?d=web&t=summary&k=Biden+-Trump&ts=full&fsc=US&svt=zoom&stt=yes&stc=yes&sta=list&

GDELT API for Trump:

https://api.gdeltproject.org/api/v2/summary/summary?d=web&t=summary&k=Trump+-Biden&ts=full&fsc=US&svt=zoom&stt=yes&stc=yes&sta=list&
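A short sketch of pulling those two endpoints with Python’s requests library follows; the output filenames are arbitrary, and parsing GDELT’s summary output into a daily time series is left to the reader.

```python
import requests

# The two GDELT summary endpoints listed above; filenames are arbitrary.
URLS = {
    "biden": "https://api.gdeltproject.org/api/v2/summary/summary?d=web&t=summary&k=Biden+-Trump&ts=full&fsc=US&svt=zoom&stt=yes&stc=yes&sta=list&",
    "trump": "https://api.gdeltproject.org/api/v2/summary/summary?d=web&t=summary&k=Trump+-Biden&ts=full&fsc=US&svt=zoom&stt=yes&stc=yes&sta=list&",
}

for name, url in URLS.items():
    resp = requests.get(url, timeout=60)
    resp.raise_for_status()
    # Save the raw response for later inspection.
    with open(f"gdelt_summary_{name}.html", "wb") as f:
        f.write(resp.content)
```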

Our fascination with “The Queen’s Gambit”

[Headline photo: Judit Polgár, a chess super grandmaster and generally considered the greatest female chess player ever (Photo by Tímea Jaksa)]

By Kent R. Kroeger (Source: NuQum.com; December 8, 2020)

One of the greatest joys I’ve had as a parent is teaching my children how to play chess.

And the most bittersweet moment in that effort is the day (and match) when they beat you and you didn’t let them.

I had that moment during the Thanksgiving weekend with my teenage son when, in a contest where early on I sacrificed a knight to take his queen (and maintained that advantage for the remainder of the contest), I became too aggressive and left my king vulnerable. As I realized the mistake, it was two moves too late. He pounced and mercilessly ended the match.

He didn’t brag. No teasing. Not even a firm handshake. He checkmated me, grabbed a bowl of blueberries out of the fridge, and coolly went to the family room to play Call of Duty with his friends on his Xbox.

I was left with an odd feeling, common among parents and teachers, I suspect. A feeling of immense pride, even as my ego was genuinely bruised.

That is the nature of chess — a game that is both simple and infinitely complex, and offers no prospect of luck for the casual or out-of-practice player. With every move there are only three possibilities: You can make a good decision, a bad decision, or maintain the status quo.

For this, I love and hate chess.

Saying ‘Chess has a gender problem’ is an understatement

My father taught me chess, as his father taught him.

Growing up in the 70s, I had a picture of grandmaster legend Bobby Fischer pasted on my bedroom door, whose defeat of Boris Spassky for the 1972 world chess title ranks with the U.S. hockey team’s 1980 “Miracle on Ice” as one of this country’s defining Cold War “sports” victories over the Soviet Union. That is not an exaggeration.

In part because of Fischer’s triumph, I played chess with my childhood friends more often than any other game, save perhaps basketball and touch football.

But despite chess being prominent in my youth, I have no memories of playing chess with members of the opposite sex. My mom? Bridge was her game of choice. The girls in my neighborhood and school? I cannot recall even one match with them. Granted, between the 5th and 11th grades, I didn’t interact with girls much for any reason. And when I did later in high school, by then my chess playing was mostly on hold until graduate school, except for the occasional holiday matches with my father and brothers.

Any study on the sociohistorical determinants of gender-based selection bias should consider chess an archetype of this phenomenon.

The aforementioned chess prodigy, Bobby Fischer, infamously said of women: “They’re all weak, all women. They’re stupid compared to men. They shouldn’t play chess, you know. They’re like beginners. They lose every single game against a man.”

Any pretense that grandmaster chess players must, by means of their chessboard skills, also be smart about everything else is easily dispensed with by referencing the words that would frequently come out of Fischer’s mouth when he was alive.

Radio personality Howard Stern once observed that many of the most talented musicians he’s interviewed often lack the ability (or confidence) to talk about anything else except music (and perhaps sex and drugs): “You can’t become that good at something without sacrificing your knowledge in other things.”

That may be one of the great sacrifices grandmaster chess players also make. As evidenced by his known comments on women and chess, Fischer was ill-informed on gender science. Even some contemporary chess greats, such as Garry Kasparov and Nigel Short, have uttered verbal nonsense on the topic.

Writer Katerina Bryant recently reflected on the persistent ignorance within the chess world about gender: “Many of us mistake chess players for the world’s best thinkers, but laying out a champion’s words on the table make the picture seem much more fractured. It’s a fallacy that someone can’t be both informed and ignorant.”

Why we are watching the “The Queen’s Gambit”

Over the Thanksgiving holiday, the intersection in time of losing to my son at chess and the release of Netflix’ new series, The Queen’s Gambit, seemed like an act of providence. The two events reignited my interest in chess and launched a personal investigation into the game’s persistent and overwhelming gender bias.

Of the current 80 highest-rated chess grandmasters, all are men. The highest-rated woman is China’s Hou Yifan (#82).

That ain’t random chance. Something is fundamentally wrong with how the game recruits and develops young talent. The Queen’s Gambit, a fictional account of a young American woman’s rise in the chess world during the 1950s and 60s, speaks directly to that malfunction. While the show mostly focuses on drug addiction, dependency, and emotional alienation, I believe its core appeal is in how it addresses endemic sexism — in this case, the gender bias of competitive chess.

For those who don’t know, The Queen’s Gambit is a 7-part series (currently streaming on Netflix) about Beth Harmon, an orphaned chess prodigy from Kentucky who rises to become one of the world’s greatest chess players while struggling with emotional problems and drug and alcohol dependency. Beth is a brilliant, intuitive chess player…and a total mess.

The Queen’s Gambit is more fun to watch than it sounds. My favorite scene is in Episode 1 when, as a young girl, she plays a simultaneous exhibition against an entire high school chess club and beats them — all boys, of course — easily.

The TV series not only aligns perfectly with the current political mood, but its gender bias message is not heavy-handed and is easily digestible. It is fast food feminism for today’s cable news feminists.

Aside from the touching characterization of Mr. Shaibel, the janitor at Beth’s orphanage who introduces her to chess, almost every other man in The Queen’s Gambit is either sexist, a substance abuser, arrogant, or emotionally stunted. What saves The Queen’s Gambit’s cookie-cutter politics from becoming overly turgid and preachy, however, is that Beth isn’t much better, or at least not until the end. Anya Taylor-Joy, the actress who plays Beth, is brilliant throughout the series and alone makes the show’s seven-hour running time worth the personal investment.

But beyond the show’s high-end acting and production values, it’s hard not to enjoy a television show about a goodhearted but dysfunctional (i.e., substance-addicted) protagonist who must interact with other dysfunctional people in an equally dysfunctional time.

The audience gets all of that from The Queen’s Gambit, along with a thankfully minor but clunky anti-evangelical Christian, anti-anti-Communism side story.

It is the perfect formula for getting love from the critics and attracting an audience, and the reviews and ratings for The Queen’s Gambit prove the point:

“(The Queen’s Gambit) is the sort of delicate prestige television that Netflix should be producing more often.” – Richard Lawson, Vanity Fair

“Just as you feel a familiar dynamic forming, in which a talented woman ends up intimidating her suitors, The Queen’s Gambit swerves; it’s probably no coincidence that a story about chess thrives on confounding audience expectations.” – Judy Berman, TIME Magazine

The audiences have lined up accordingly. In its first three weeks after release (from October 23 to November 8), Nielsen estimates that The Queen’s Gambit garnered almost 3.8 billion viewing minutes in the U.S. alone.

As the streaming TV ratings methodologies are still in their relative infancy, I prefer to look at Google search trends when comparing media programs and properties. In my own research, I have found that Google searches are highly correlated with Nielsen’s streaming TV ratings (ranging between a Pearson correlation of 0.7 and 0.9 from week-to-week).
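As a rough illustration of that check, here is a minimal sketch with made-up weekly figures; real inputs would come from Google Trends exports and Nielsen’s weekly streaming reports.

```python
import pandas as pd

# Made-up weekly figures for one show: Google Trends index (0-100) and Nielsen
# viewing minutes (billions). Purely illustrative, not real data.
weekly = pd.DataFrame({
    "google_trends_index": [62, 88, 100, 73, 55],
    "nielsen_viewing_minutes_bn": [0.9, 1.4, 1.6, 1.1, 0.8],
})

# Pearson correlation between weekly search interest and viewing minutes.
r = weekly["google_trends_index"].corr(weekly["nielsen_viewing_minutes_bn"])
print(f"Pearson r = {r:.2f}")
```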

Figure 1 shows a selection of the most popular streaming TV shows in 2020 and their relative number of Google searches (in the U.S.) from July to early December [The Queen’s Gambit is the dashed black line.]. While The Queen’s Gambit hasn’t matched the peak search volume of some other shows (The Umbrella Academy, Schitt’s Creek, and The Mandalorian), it has sustained a high level of searching over a longer period of time. Over a 5-week period, only The Mandalorian has had a cumulative Google Trends Index higher than The Queen’s Gambit (1,231 versus 1,150, respectively). The next highest is The Umbrella Academy at 1,030.

Figure 1: Google Searches for Selected Streaming TV Shows in 2020

Whatever the core reason for the popularity of The Queen’s Gambit, there is no denying that the show has attracted an unprecedented mass audience for a streaming TV series.

It makes me think Netflix might consider making more than seven episodes.

My one big annoyance with “The Queen’s Gambit”

Mark Twain once said of fiction: “Truth is stranger than fiction, but it is because fiction is obliged to stick to possibilities; truth isn’t.”

While I appreciate Twain’s sentiment — fiction should not be too restrained by truth — he clearly never had to explain to his teenage son that fire-breathing dragons did not exist in the Middle Ages, despite what Game of Thrones suggests. I fear The Queen’s Gambit will launch a generation of people who think an American woman competed internationally in chess in the 1960s and thereby diminish the much more profound accomplishments of an actual female chess prodigy, Judit Polgár, arguably the greatest female chess player of all time, who famously refused to compete in women-only chess tournaments.

While her achievements would occur two decades after Beth Harmon’s fictional rise, Polgár, a 44-year-old Hungarian, fought the real battle and offers the more substantive and entertaining story, in my opinion. It would be like Hollywood making a movie about a fictional female law professor defeating institutional sexism and rising to the Supreme Court in the 1960s, when the true story of Ruth Bader Ginsburg already exists.

Judit Polgár, a chess super grandmaster, demonstrating the “look” (Photo by Stefan64)

Granted, Walter Tevis’ book upon which The Queen’s Gambit is based was published in 1983, before Polgár was competing internationally, but for someone who followed Polgár’s career, The Queen’s Gambit‘s inspirational tale rings a bit hollow.

No American woman was competing at the highest levels of chess during The Queen’s Gambit’s time frame of the 1950s and 60s. In stark contrast, Polgár actually did that from 1990 to 2014, achieving a peak rating of 2735 in 2005 — which put her at #8 in the world at the time and would place her at #20 in the world today.

Perhaps only chess geeks understand how rare such an accomplishment is in chess, regardless of gender.

Polgár, at the age of 12, was the youngest chess player ever, male or female, to become ranked in the World Chess Federation’s top 100 (she was ranked #55) and became a Grandmaster at 15, breaking the youngest-ever record previously held by former World Champion Bobby Fischer.

The list of current or former world champions Polgár has defeated in either rapid or classical chess is mind-blowing: Magnus Carlsen (the highest rated player of all time), Anatoly Karpov, Garry Kasparov, Vladimir Kramnik, Boris Spassky, Vasily Smyslov, Veselin Topalov, Viswanathan Anand, Ruslan Ponomariov, Alexander Khalifman, and Rustam Kasimdzhanov. [I’m geeking out just reading these names.]

And most amazing of all — Polgár may not be the best chess player in her family.

Still, I want to be clear: I loved The Queen’s Gambit. I don’t sit up at 3 a.m. watching Netflix on my iPhone unless I have a good reason — such as watching Czech porn. I merely offer a piece of criticism so that some poor sap 20 years from now doesn’t think The Queen’s Gambit is based on a true story. It most certainly isn’t. It is a pure Hollywood-processed work of fiction.

  • K.R.K.

Send comments to: nuqum@protonmail.com

 

 

Did analytics ruin Major League Baseball?

[Headline graphic: Billy Beane (left) and Paul DePodesta (right). The General Manager and assistant General Manager, respectively, for the 2002 Oakland A’s and who inspired Michael Lewis’ book, “Moneyball.” (Photo by GabboT; used under the CCA-Share Alike 2.0 Generic license.)]

By Kent R. Kroeger (Source: NuQum.com; November 25, 2020)

As he announced his resignation from the Chicago Cubs as that organization’s president of baseball operations in November, Theo Epstein, considered by many the High Priest of modern baseball analytics, made this shocking admission about the current state of baseball:

“It is the greatest game in the world but there are some threats to it because of the way the game is evolving. And I take some responsibility for that because the executives like me who have spent a lot of time using analytics and other measures to try to optimize individual and team performance have unwittingly had a negative impact on the aesthetic value of the game and the entertainment value of the game. I mean, clearly, you know the strikeout rates are a bit out of control and we need to find a way to get more action in the game, get the ball in play more often, allow players to show their athleticism some more and give the fans more of what they want.”

Epstein’s comments were painful for me on two fronts. First, he was leaving the only baseball team I’ve ever loved, having helped the Cubs win the only World Series championship of my lifetime. Second, he put a dagger in the heart of every Bill James and sabermetrics devotee who, like myself, has spent countless hours poring through the statistical abstracts for Major League Baseball (MLB) and the National Football League on a quest to build the perfect Rotisserie league baseball team and fantasy football roster.

There is no better feeling than the long search and discovery for those two or three “value” players who nobody else thinks about and who can turn your Rotisserie or fantasy team into league champs.

In a direct way, sports analytics are the intellectual steroids for a generation of sports fans (slash) data geeks who love games they never played beyond high school, if even then.

Epstein’s departure was not entirely a surprise. The Cubs have not come close to repeating their glorious World Series triumph of 2016, though it is hard to pin that on Epstein. The Cubs still have (when healthy) one of the most talented rosters in baseball. Instead, the surprise was Epstein’s targeting of ‘analytics’ as one of the causes of baseball’s arguable decline.

Like many baseball fans, I’ve assumed baseball analytics—immortalized in Michael Lewis’ book “Moneyball” about the 2002 Oakland A’s and its general manager Billy Beane, who hired  a Yale economics grad, Paul DePodesta, to assist him in building a successful small market (i.e., low payroll) baseball team—helped make the MLB, from top-to-bottom, more competitive.

In the movie based on Lewis’ book, starring Brad Pitt and Jonah Hill, this scene perfectly summarizes the value of analytics in baseball (and, frankly, could apply to almost every major industry):

Peter Brand (aka. Paul DePodesta, as played by Jonah Hill):

“There is an epidemic failure within the game to understand what is really happening and this leads people who run major league baseball teams to misjudge their players and mismanage their teams…

…People who run ball clubs think in terms of buying players. Your goal shouldn’t be to buy players. Your goal should be to buy wins, and in order to buy wins you need to buy runs.

The Boston Red Sox see Johnny Damon and they see a star who’s worth seven-and-a-half million dollars. When I see Johnny Damon what I see is an imperfect understanding of where runs come from. The guy’s got a great glove and he’s a decent lead-off hitter. He can steal bases. But is he worth seven-and-a-half million a year?

Baseball thinking is medieval. They are asking all the wrong questions.”

While Beane and DePodesta may have lacked world championships after they introduced analytics into the process, the A’s did have nine winning seasons from 2002 to 2016 during their tenure, which is phenomenal for a small-market, low-payroll team.

At the team-level, the 21st-century A’s are the embodiment of how analytics can help an organization.

But is Epstein still right? Has analytics hurt baseball at the aggregate level?

Let us look at the facts…

Major League Baseball has a Problem

Regardless of the veracity of Epstein’s indictment of analytics for its net role in hurting the game of baseball, does professional baseball have a problem?

The answer is a qualified ‘Yes.’

These two metrics describe the bulk of the problem: (1) Average per game attendance and (2) World Series TV viewership. Since the mid-1990s, baseball game attendance relative to the total U.S. population has been in a near constant decline, going from a high of 118 game attendees (per 1 million people) in the mid-1990s to 98 game attendees (per 1M) in the late 2010s (see Figure 1). At the same time, the long-term trend is still positive. That cannot be discounted.

Figure 1: MLB per game attendance (per 1 million people) (Source: baseball-reference.com)

While the relative decline is significant, the real story of MLB attendance since the league’s inception in the late-19th century is the surge in attendance after World War II, a strong decline after that until the late-1960s, and a resurgence during the 1970s and 80s. In comparison, the attendance decline per capita since the mid-1990s has been relatively small.

Consider also that despite a per capita decline in game attendance since the 1990s, total season attendance has still grown. In 1991, 56.8 million MLB tickets were sold; by 2017, 72.7 million tickets were sold. This increase in gross ticket sales has been matched by a steady rise in MLB ticket prices as well. The average cost of an MLB baseball game in 1991 was $142, but by 2017 that figure increased to $219 (a 176 percent increase). In that context, the 15 to 20 percent decline in game attendance (per capita) seems more tolerable and far from catastrophic. In fact, if it weren’t for this next metric, baseball might be in great shape, even if its relative popularity is in decline.

The TV ratings and viewership for MLB’s crown jewel event, the World Series, have been in a near straight-line decline since the mid-1970s when Billy Martin’s New York Yankees and the Tommy Lasorda-led Los Angeles Dodgers were the sport’s dominant franchises, and happened to be in this nation’s two largest cities. Big-market teams in the World Series are always good for TV ratings.

As seen in Figure 2, average TV viewership for the World Series (the orange line) has declined from a high of 44.3 million in 1978 (Yankees vs. Dodgers) to just under 9.8 million in the last World Series (Dodgers vs. Rays).

Figure 2: The TV Ratings and Viewership (average per game) for the World Series since 1972 (Source: Nielsen Research)

Even with the addition of mobile and online streaming viewers — which lifts the 2020 World Series viewership number to 13.2 million — the decline in the number of eyeballs watching the World Series since the 1970s has been dramatic.

In combination with the trends in game attendance, the precipitous decline in live viewership offers one clear conclusion: Relatively fewer people are going to baseball games or watching them on TV or the internet. That’s a formula for an impending financial disaster among major league baseball franchises.

While stories of baseball’s imminent death are exaggerated, baseball does have serious problems. But what are they exactly? And how has analytics impacted those probable causes?

Are baseball’s problems bigger than the game itself?

Before looking within the game of baseball itself (and the role of analytics) to explain its relative popularity decline, we must consider the broader context.

Sports fans today demand something different from what MLB offers

Living with a teenage son who loves the NBA and routinely mocks my love of baseball, I see a generational divide that will challenge any attempt to update a sport once considered, without debate, to be America’s pastime. Kids (and, frankly, many of their parents) don’t have the patience or temperament to appreciate the deep-rooted intricacies of a game where players spend more time waiting than actually playing. Only 10 percent of a baseball game involves actual action, according to one study. For kids raised on Red Bull and Call of Duty, baseball is more like a horse and buggy than a Bugatti race car.

And the in-game data supports that assertion. In 1970, a nine-inning major league baseball game took, on average, two-and-a-half hours to complete. In 2020, it takes three hours and six minutes. By comparison, a World Cup soccer match takes one hour and 50 minutes from the moment the first whistle blows. An NBA game takes about two-and-a-half hours.

Baseball is too slow…and getting slower.

[For a well-constructed counterargument to the ‘too slow’ conclusion, I invite you to read this essay.]

In contrast, the NBA and World Cup soccer possess near constant action. Throw in e-games (if you consider those contests a sport) and it is reasonable to conjecture that baseball is simply a bad fit for the times. Even NFL football, whose average game takes over three hours, has challenges in that regard.

Did analytics lead to longer baseball games? Let us examine the evidence.

Figure 3 shows the long-term trend in the length of 9-inning MLB games divided into baseball ‘eras’ as defined by Mitchell T. Woltring, Jim K. Rost, and Colby B. Jubenville in their 2018 research paper published by Sports Studies and Sports Psychology. They identified seven distinct eras in major league baseball: (1) “Dead Ball” (1901 to 1919), (2) “Live Ball” (1920 to 1941), (3) “Integration” (1942 to 1960), (4) “Expansion” (1961 to 1976), (5) “Free Agency” (1977 to 1993), (6) “Steroids” (1994 to 2005) and (7) “Post-Steroids” (2006 to 2011). However, for this essay, I relabeled their ‘Post-Steroids era’ as the ‘Analytics era’ and extended it to the present.

(Note: MLB game length was not consistently measured until the “Integration era.”)

Figure 3: Average length of a 9-inning MLB game since 1946.

Though I will share upon request the detailed statistical analysis of the intervention effects of the baseball eras on the average length of MLB games, the basic findings are straightforward:

(1) The average length of 9-inning MLB games significantly increased during the ‘Integration,’ ‘Free Agency,’ and ‘Analytic’ eras, but did not increase during the ‘Expansion’ and ‘Steroids’ eras.

(2) The long-term trend was already pointing up before the ‘Analytics era’ (+50 seconds per year), though analytics may have had a larger marginal effect on game length (+78 seconds per year).
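For readers curious what such an intervention analysis might look like, here is a minimal sketch (not the author’s actual model) that regresses average 9-inning game length on a linear year trend plus era dummies using statsmodels; the data file and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical yearly data: season year and average 9-inning game length in
# minutes (columns "year" and "avg_minutes" are assumed, not a real file).
games = pd.read_csv("mlb_game_length_by_year.csv")

def era(year: int) -> str:
    # Post-1946 eras as defined above; game length was not consistently
    # recorded before the Integration era.
    if year <= 1960:
        return "Integration"
    if year <= 1976:
        return "Expansion"
    if year <= 1993:
        return "Free Agency"
    if year <= 2005:
        return "Steroids"
    return "Analytics"

games["era"] = games["year"].apply(era)

# OLS with a linear year trend plus era dummies; the era coefficients capture
# level shifts in game length relative to the baseline era.
model = smf.ols("avg_minutes ~ year + C(era)", data=games).fit()
print(model.summary())
```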

As to why the ‘Analytics era’ saw an increase in game times, one suggested explanation is that the ‘Steroids era’ disproportionately rewarded juiced-up long-ball hitters who tended to spend less time at the plate. In contrast, though the ‘Analytics era’ also has emphasized home run hitting, the players hitting home runs are now more patient. According to baseball writer Fred Hofstetter, pitchers have also changed:

“This (increase in game times) won’t surprise anyone who follows the game closely. The general demographic change trending into 2020:

  1. Patient hitters are replacing free swingers
  2. Hard-throwing strikeout-getters are replacing pitch-to-contact types

Pitchers who throw harder tend to take more time between pitches. Smart hitters take more pitches. There are more pitches with more time between them. The result is a rising average of time between pitches.”

Are these changes in the game related to analytics? It is hard to know given the concurrent (and assumed) decline in steroid use in the 2000s MLB, but the apparent consensus is that the pitcher-batter dynamics since 2000 have been more sophisticated and time-consuming than during the ‘Steroid era.’

My conclusion on the impact of analytics on the length of MLB baseball games: Unclear.

Are there other aspects of baseball affected by analytics?

Investigating the role of analytics in 21st century baseball is complicated by the confounding effects of other changes going on in the game around the same time — the most obvious being MLB’s increased enforcement of its performance enhancing drug policies. But sports writer Jeff Rivers notes another ongoing trend: this country’s best athletes are increasingly choosing football and basketball over baseball, though this trend may have been going on for some time.

“Major League Baseball used to offer its athletes the most prestige, money and fame among our nation’s pro team sports, but that hasn’t been true for decades,” writes Rivers. “Consequently, Major League Baseball continues to lose in the competition for talent to other major pro team sports.”

It is also possible analytics have exacerbated this supposed decline in athlete quality by discouraging some of baseball’s most exciting plays.

“The focus on analytics in pro sports has led to more scoring in the NBA…but fewer stolen bases and triples, two of the game’s most exciting plays, in pro baseball,” asserts Rivers.

Is there really a distinct ‘Analytics era’ in baseball?

Another problem in assessing the role of baseball analytics is that the ‘Analytics era’ (what I’ve defined as 2006 to the present) may not be that distinct.

Henry Chadwick invented the baseball box score in 1858 and, by 1871, statistics were consistently recorded for every game and player in professional baseball. In 1964, Earnshaw Cook published his statistical analysis of baseball games and players and seven years later the Society for American Baseball Research (SABR) was founded.

In the early 1970s, as statistics advanced as a topic among fans, Baltimore Orioles player Davey Johnson was writing FORTRAN computer code on an IBM System/360 to generate statistical evidence supporting his belief that he should bat second in the Orioles lineup (his manager Earl Weaver was not convinced, however).

In 1977, Bill James published the first of his annual Baseball Abstracts which, through the use of complex statistical analyses, argued that many of the popular performance metrics — such as batting average — were poor predictors of how many runs a team would score. Instead, James and other SABRmetricians (as they would be called) argued that a better measure of a player’s worth is his ability to help his team score more runs than the opposition. To that end, the SABRmetricians initially preferred metrics such as On-Base Percentage (OBP) and Slugging Percentage (SLG) to judge player values and would later prefer combining those metrics to create the On-base Plus Slugging (OPS) performance metric.

[Note: OBP is the ratio of the batter’s times-on-base (TOB) (which is the sum of hits, walks, and number of times hit by pitch) to their number of plate appearances. SLG measures a batter’s productivity and is calculated as total bases divided by at bats. OPS is simply the sum of OBP and SLG.]
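Expressed as code, the simplified definitions in the note above are straightforward; the season totals below are made up purely for illustration.

```python
def obp(hits: int, walks: int, hbp: int, plate_appearances: int) -> float:
    """On-Base Percentage: times on base (hits + walks + hit-by-pitch)
    divided by plate appearances, per the simplified definition above."""
    return (hits + walks + hbp) / plate_appearances

def slg(singles: int, doubles: int, triples: int, home_runs: int, at_bats: int) -> float:
    """Slugging Percentage: total bases divided by at bats."""
    total_bases = singles + 2 * doubles + 3 * triples + 4 * home_runs
    return total_bases / at_bats

def ops(obp_value: float, slg_value: float) -> float:
    """On-base Plus Slugging: simply OBP + SLG."""
    return obp_value + slg_value

# Made-up season line, purely for illustration.
player_obp = obp(hits=150, walks=70, hbp=5, plate_appearances=650)
player_slg = slg(singles=90, doubles=35, triples=5, home_runs=20, at_bats=550)
print(round(ops(player_obp, player_slg), 3))
```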

Batting averages and pitchers’ Earned-Run-Averages (ERA) have been a systematic part of player evaluations since baseball’s earlier days. Modern analytics didn’t invent most of the statistics used today to assess player value, but merely refined and advanced them.

Nonetheless, there is something fundamentally different in how MLB player values are assessed today than in the days before Billy Beane, Paul DePodesta and Moneyball.

But when did analytics truly take over the talent acquisition process in major league baseball? There is no single, well-defined date. However, many baseball analysts point to the 2004 Boston Red Sox, whose general manager was Theo Epstein, as the first World Series winner to be significantly driven by analytics.

Something unique and profound was going on in major league baseball’s front offices in the time between Billy Beane’s 2002 A’s and the Boston Red Sox’ 2007 World Series win, their second championship in four years.

By 2009, most major league baseball teams had a full-time analytics staff working in tandem with their traditional scouting departments, according to Business Administration Professor Rocco P. Porreca.

So, why did I pick 2006 as the start of the ‘Analytics era’? No definitive reason except that is roughly the halfway point between the release of Lewis’s book Moneyball and 2009, the point at which most major league baseball teams had stood up a formal analytics department. It would have been equally defensible to set 2011 or 2012 as the starting point for the ‘Analytics era’ as many of the aggregate baseball game measures we are about to look at changed direction at around that time.

The Central Mantra of Baseball Analytics: “He gets on base”

Lewis’ book Moneyball outlined the player attribute the 2002 A’s assistant general manager Paul DePodesta sought most when evaluating talent: select players who can get on base.

This scene from the movie Moneyball drives home that point:

As the 2002 A’s scouting team identifies acquisition prospects, the team’s general manager, Billy Beane, singles out New York Yankees outfielder David Justice:

A’s head scout Grady Fuson:  Not a good idea, Billy.

Another A’s scout:  Steinbrenner’s so pissed at his decline that he’s willing to eat a big chunk of his contract just to get rid of him.

Billy Beane:  Exactly.

Fuson: Ten years ago, David Justice—big name. He’s been in a lot of big games. He’s gonna really help our season tickets early in the year, but when we get in the dog days in July and August, he’s lucky if he’s gonna hit his weight…we’ll be lucky if we get 60 games out of him. Why do you like him?

[Beane points at assistant general manager Peter Brand (aka. Paul DePodesta)]

Peter Brand: Because he gets on base.

This was the fundamental conclusion analytic modelers started driving home to a growing number of baseball general managers after 2002: find players who can get on base.

And Theo Epstein was among the first general managers to drink the analytics Kool-Aid, and he did it while leading one of baseball’s richest franchises — the Boston Red Sox. Shortly after the 2002 World Series, the Red Sox hired the 28-year-old Epstein, the youngest general manager in MLB history, to help them end their 86-year World Series drought. Two years later, the Red Sox and Epstein did just that, and one of the reasons cited for the Red Sox’s success was Epstein’s use of analytics in player evaluations. Epstein would eventually take his analytics to the Chicago Cubs in 2011, and they ended their 108-year championship drought five years later.

Until Epstein’s departure from the Cubs, there had been scant debate within baseball about the value of analytics. Almost every recent World Series champion — the Red Sox, Cubs, Royals, Astros, and others — has an analytics success story to tell. By all accounts, it’s here to stay.

So why, on his way out the door in Chicago, did Epstein throw a verbal grenade into the baseball fraternity by suggesting analytics have had “a negative impact on the aesthetic value of the game and the entertainment value of the game”? He specifically cited the recent rise in strikeouts, bases-on-balls, and home runs (as well as the decline in stolen bases) as the primary cause of baseball’s aesthetic decline.

Is Epstein right? The short answer: It is not at all clear baseball analytics are the problem, even if they did change the ‘aesthetics’ of the game.

A brief look at the data…

As a fan of baseball, I find bases-on-balls and strike outs near the top of my list of least favorite in-game outcomes.

But when we look at the long-term trends in walks and strike outs, it’s hard to pin the blame on analytics (see Figure 4). Strike outs in particular have been on a secular rise since the beginning of organized baseball in the 1870s, with only three periods of sustained decline — the ‘Expansion,’ ‘Live Ball’ and ‘Steroid’ eras. The ‘Analytics era’ emphasis on hard-throwing strike out pitchers over slower-throwing ‘location’ pitchers may be working (strike outs have gone from 6 to 9 per team per game), but it is part of baseball’s longer-term trend — pitchers have become steadily better at striking out batters since the sport’s beginning. The only times batters have caught up with pitching are when the baseball itself was altered (“Live Ball era”), pitching talent was watered down (“Expansion era”), or the batters juiced up (“Steroids era”).

As for the rise in bases-on-balls, there is evidence of a trend reversal around 2012, with walks rising sharply between 2012 and 2020, the heart of the ‘Analytics era.’ At least tentatively, therefore, we can conclude that one excitement-challenged baseball event has become more prominent; but even in this case, the current number of walks per team per game (3.5) is near the historical average. At the bases-on-balls peak in the late 1940s, baseball was at its apex in popularity, and MLB attendance declined as bases-on-balls plummeted through the 1950s (see Figure 1).

Figure 4: Trends in Bases-on-Balls and Strike Outs in Major League Baseball since 1871.

It is difficult to blame baseball’s relative decline in popularity on increases in strike outs and walks or the role of analytics in those in-game changes.

But what about two of baseball’s most exciting plays — stolen bases and home runs? According to Epstein, the analytics-driven decline in stolen bases and concomitant rise in home runs have robbed the game of crucial action that helps drive fan excitement.

As shown in Figure 5, there is strong evidence that the ‘Analytics era’ has seen a reversal in trends for both stolen bases and home runs. Since 2012, the number of home runs per team per game has risen from 0.9 to 1.3, and the number of steals per team per game has fallen from 0.7 to 0.5.

Stolen bases may be a rarity now in baseball, but they have not been common since the ‘Live Ball era,’ having peaked at around 0.9 per team per game in the late 1980s. In truth, stolen bases have never been a big part of the game.

Home runs are a different matter. Epstein’s complaint that there are, today, too many home runs in baseball is a puzzling charge. In 45 years as a baseball fan, I’ve yet to hear a fan complain that his or her team hit too many home runs.

Yes, home runs eliminate some of the drama associated with hitting a ball in play — Will the batter stretch a single into a double or a double into a triple? Will the base runner go for third or for home? — but do those in-game aesthetics create more adrenaline or dopamine than the anticipation over whether a well-hit ball will go over the fence? I, personally, find it hard to believe that too many home runs are hurting today’s baseball.

But is Epstein right in saying analytics may have played a role in the recent increase in home runs? The answer is an emphatic yes.

As the MLB worked to remove steroids from the game in the late 1990s, the number of home runs per game dropped dramatically…until 2011. As the ‘Analytics era’ has become entrenched in baseball, home runs have increased year-to-year as fast as they did during the heyday of steroids, rising from 0.9 per game per team in 2011 to 1.3 in 2020. In an historical context, professional baseball has never seen as many home runs as it does today.

However, again, in the long-term historical context, the ‘Analytics era’ is just continuing a trend that has existed in baseball since its earliest days. Most batters have always coveted home runs and all pitchers have loathed them — analytics didn’t cause that dynamic.

Figure 5: Trends in Stolen Bases and Home Runs in Major League Baseball since 1871.

The holy grail of baseball analytic metrics is On-base Plus Slugging (OPS) — a comprehensive measure of batter productivity that captures both how often a batter reaches base and how often he hits for extra bases (and, from a defensive perspective, an indicator of how well a team’s pitching and fielding lineup stunts batter productivity).

The highly-regarded OPS is important to baseball analytic gurus because of its strong correlation with the proximal cause of why teams win or lose: The number of runs they score.

Since 1885, the Pearson correlation between OPS and the number of runs per game is 0.56 (which is highly significant at the two-tailed, 0.05 alpha level). And it is on the OPS metric that the ‘Analytics era’ has made a surprisingly modest impact, hardly large enough to be responsible for harming the popularity of baseball (see Figure 6). If anything, shouldn’t a higher OPS in the aggregate indicate a more exciting type of baseball, even if it includes a larger number of home runs?
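
For readers who want to check a figure like that themselves, here is a minimal sketch of the calculation, assuming a season-level table with one row per MLB season since 1885 and columns named year, ops and runs_per_game; the file name and column names are placeholders, not the dataset used for this article:

import pandas as pd
from scipy.stats import pearsonr

# Hypothetical season-level file with columns: year, ops, runs_per_game
seasons = pd.read_csv("mlb_season_totals.csv")
seasons = seasons[seasons["year"] >= 1885]

# Pearson correlation between league-wide OPS and runs scored per game
r, p_value = pearsonr(seasons["ops"], seasons["runs_per_game"])
print(f"Pearson r = {r:.2f}, two-tailed p = {p_value:.4f}")
# The article reports r = 0.56, significant at the 0.05 level (two-tailed)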

Prior to the ‘Analytics era,’ the ‘Steroids era’ (1994 to 2005) witnessed a comparable surge in OPS (and home runs) and the popularity of baseball grew, at least until stories of steroids-use became more prominent in sports media.

Figure 6: Trends in On-base Plus Slugging (OPS) and # of Runs in Major League Baseball since 1871.

Epstein’s pinning of baseball’s current troubles on analytics raises the question of what other factors might explain some of the recent changes in the game’s artfulness. These in-game changes cannot all be dropped at the feet of analytics. The slow pruning of steroids from the game, shifts in baseball’s young talent pool, the changing tastes of American sports fans, and the growth in other sports entertainment options cannot be ignored.

Final Thoughts

Baseball has real problems, particularly with the new generation of sports fans. The MLB should not underestimate the negative implications of this problem.

However, the sport is not dying and analytics is not leading it towards a certain death. Analytics did not cause baseball’s systemic problems.

For those who assume major league baseball is a sinking ship, analytics has done little more than rearrange the deck chairs on the Titanic. However, for those of us who believe baseball is still one of the great forms of sports entertainment, we must admit the sport is dangerously out of touch with the modern tastes and appetites of the average American sports fan.

And though analytics may not have helped the sport as much as Moneyball suggested it would, neither has it done the damage Epstein suggests.

  • K.R.K.

Send comments to: nuqum@protonmail.com

Postscript:

This is my favorite scene from Moneyball. It is the point at which head scout Grady Fuson (played by Ken Medlock) confronts Billy Beane (Brad Pitt) over his decision-making style as general manager. Most Moneyball moviegoers (and readers of Lewis’ book) probably view Fuson as the bad guy in the film — a dinosaur unwilling to change with the times. As a statistician who’s faced similar confrontations in similar contexts, I see Fuson as an irreplaceable reality check for data wonks who believe hard data trumps experience and intuition. In my career, I have found all of those perspectives important.

Fuson asks Beane into the hallway so he can clear the air. Fuson then says to Beane:

“Major League Baseball and its fans would be more than happy to throw you and Google boy under the bus if you keep doing what you’re doing here. You don’t put a team together using a computer.

Baseball isn’t just numbers. It’s not science. If it was, anybody could do what we’re doing, but they can’t because they don’t know what we know. They don’t have our experience and they don’t have our intuition.

You’ve got a kid in there that’s got a degree in economics from Yale and you’ve got a scout here with 29 years of baseball experience.

You’re listening to the wrong one now. There are intangibles that only baseball people understand. You’re discounting what scouts have done for a hundred and fifty years.”

Years later, Fuson would react to how he was portrayed in Lewis’ book and subsequent movie:

“When I was a national cross-checker, I raised my hand numerous times and said, ‘Have you looked at these numbers?’ I had always used numbers. Granted, as the years go on, we’ve got so many more ways of getting numbers. It’s called ‘metrics’ now. And metrics lead to saber-math. Now we have formulas. We have it all now. But historically, I always used numbers. If there’s anything that people perceived right or wrong, it’s that me and Billy are very passionate about what we do. And so when we do speak, the conversation is filled with passion. He even told me when he brought me back, ‘Despite what some people think, I always thought we had healthy, energetic baseball conversations.’”

At times I think people want to believe analytics and professional intuition are mortal enemies. In my experience, one cannot live without the other.

 

Wake up, America! The U.S. is not going bankrupt

[Headline graphic: Photo downloaded flickr.com/photos/68751915@N05/6793826885 (This image is used under the CCA-Share Alike 2.0 Generic license)]

By Kent R. Kroeger (Source: NuQum.com; November 25, 2020)

The recent Twitter exchange between Democratic congresswoman Alexandria Ocasio-Cortez (AOC) and Nikki Haley, the former U.S. Ambassador to the U.N., over whether the U.S. can afford direct payments to U.S. households to help with the economic damage caused by the coronavirus pandemic is a shining example of how our two major political parties can unite in constructive dialogue to solve our nation’s most pressing problems.

I’m just kidding.

The AOC-Haley debate over direct financial aid to Americans suffering due to the pandemic was a poopfest.

AOC and Nikki basically told Joe Biden and his call for national unity to go screw themselves.

The AOC-Haley Twitter exchange lasted only a few snarky tweets:

AOC: To get the (corona)virus under control, we need to pay people to stay home.

Haley: AOC, Are you suggesting you want to pay people to stay home from the money you take by defunding the police? Or was that for the student debts you wanted to pay off, the Green New Deal or Medicare for All? #WhereIsTheMoney

AOC: Nikki, I’m suggesting Republicans find the spine to stand up to their corporate donors & vote for the same measures they did in March, except without the Wall St bailout this time.

And I know you’re confused abt actual governance but police budgets are municipal, not federal.

AOC: Utterly embarrassing that this woman was a governor & still doesn’t have a grasp on public investment. Wonder if she says federal financing works like a piggy bank or household too?

All this faux-seriousness from folks who worship Trump for running the country like his casino.

And if you allow me to judge the winner of the AOC-Haley Twitter spat, AOC won by first-round knockout. It was the 1986 Mike Tyson-Marvis Frazier fight, only with more attractive combatants.

But before I give AOC too much credit for her rebuttal to Haley’s assertion that the U.S. can’t afford to solve America’s most serious problems (i.e., health care costs, student debt, climate change, etc.), realize that AOC is simply repeating an economic argument developed by advocates of Modern Monetary Theory (MMT).

In an over-simplified summary, MMT describes currency as a public monopoly and economic problems such as unemployment as evidence that a currency monopolist is overly restricting the supply of the financial assets needed to pay taxes and satisfy savings desires.

Stony Brook University Professor Stephanie Kelton, a former Bernie Sanders economic advisor, is among MMT’s most visible current advocates.

And, in truth, MMT is not that new. The theory’s core ideas are not far removed from 100-year-old Chartalist theory, and similar economic arguments have been offered by U.S. economists and bankers for decades.

In a dialogue published by the Harvard Business Review in 1993, William A. Schreyer, Chairman of the Board Emeritus of Merrill Lynch & Co., Inc., New York, New York, and a sharp critic of MMT, still concurred with one of its principal conclusions:

“The federal budget deficit is not the most important threat facing the U.S. economy. When policymakers focus narrowly on the budget deficit, they ignore what truly drives rising prosperity and long-term economic growth, that is, saving and investment. There is real danger in Washington’s myopic fear of the deficit. As we have seen too often in recent years, a focus on deficit-driven government accounting can place growth-oriented economic policy in a straitjacket.”

But there is also little doubt that politicians like Bernie Sanders and AOC are most responsible for helping propel MMT-thinking into mainstream political debate.

But before you think I’m a doe-eyed lefty, think again. I voted for Trump twice, label myself ‘pro-life,’ believe ‘woke’ politics has more to do with politicians finding a new way to milk Americans for more campaign donations than anything else, and remain a healthy skeptic of the doomsday predictions surrounding climate change (though I am convinced the earth is warming due to human activity and that its consequences are real).

AOC would never ask someone like me for my vote.

Nonetheless, AOC’s logic on helping American households deal with the economic consequences of the pandemic is far more coherent than anything Haley or any other major politician has said on the subject. And that includes Nancy Pelosi and Joe Biden.

No country has ever gone bankrupt spending money on solving its most serious problems — and the coronavirus pandemic is that type of problem.

As Prof. Kelton likes to ask, “Did we spend too much money on World War 2?”

All that being said, I remain sensitive to the question: “Can large, long-term federal deficits be a bad thing?”

And my understanding of Prof. Kelton and MMT is that, of course, federal deficits are bad if the government spends the money poorly. Imagine if our $27 trillion national debt had been obtained entirely through the government purchase of solid gold bathtubs for every American street corner. Our currency would be worthless and our economy in a shambles. Buying U.S. debt would be the worst investment option on the planet.

But that’s not how this country has spent the money it has printed. Not even close. Past spending and investment decisions (public and private) have made the U.S. economy the best long-term investment around. Admittedly that could change. A twenty-year military occupation of a country with little impact on the world economy or U.S. strategic interests might do that if we give it a chance. But even that questionable investment comes with economic upsides, particularly if you are a military contractor or an MSNBC military “analyst.”

The fact is, simplistic notions of what policies the U.S. can and cannot afford are rooted in, as AOC puts it, a flawed understanding of public investment. Financing the federal deficit is not like a piggy bank or household budget. To treat it as such is to risk doing real and lasting harm to the U.S. economy and the American people.

Therefore, it is time we collectively wake up to the con that the U.S. cannot sustain deficit spending, a deception engineered out of self-interest by politicians from both parties who gain more power by perpetuating it.

The reality is that the U.S. can sustain deficit spending as long as the money is spent wisely and solves real problems.

AOC — 1 … Nikki Haley and the U.S. Political Establishment — 0.

  • K.R.K.

Send comments to: nuqum@protonmail.com

An Elegy for Donald Trump

[Headline graphic: President Donald Trump speaking at the 2018 Conservative Political Action Conference (CPAC) in National Harbor, Maryland (Photo by Gage Skidmore; used under the CCA-Share Alike 2.0 Generic license.)]

By Kent R. Kroeger (Source: NuQum.com; November 6, 2020)

I am a Bernie Sanders-supporting registered Democrat (check the voter databases in Iowa and New Jersey); yet, I never hated Donald Trump, even as everyone around me did. My wife and I stopped talking politics on the morning of November 9, 2016 and my mother-in-law to this day only refers to him as “Number 45” or “Turdface.”

But I see beyond his fake tan and broken syntax and recognize some of the good things Trump did over the past four years: He reset some bad trade agreements. Told our European allies it was time they chipped in their share for mutual defense. Oversaw an unprecedented surge in small business growth. And, most importantly, helped expose the falsehood of the Washington, D.C. trope that large, ceaseless federal budget deficits are inflationary and need to be dealt with through deep cuts in entitlement and discretionary spending. If there is one thing Donald Trump is good at, it is running up debt and still being rich.

Finally, I always knew Trump wasn’t a Russian tool (as that whole three-year, media-fueled hullabaloo was the Oxford Dictionary definition of ‘fake news’). The Washington establishment wanted to discredit Trump from the moment he was elected and the Russiagate story was the perfect pulp fiction to do it.

Truth be told, I was entertained watching Trump annoy the news media. They kept telling me Trump was a liar but that’s not what I saw. Trump was not a fabricator of facts — he was simply a CEO who could only regurgitate new information after he had dumbed it down to a point where it was unrecognizable as discernible fact. That’s not lying. That’s called being wrong. There’s a difference. A big difference.

Kevin Weinberg, a 30-something computer programmer, writer and anime aficionado, posted on social media one of the most honest postmortems I’ve yet seen for soon-to-be former President Donald Trump:

If you love Trump as much as I do, don’t riot. Don’t loot (not that I need to say it, as our side never does this). Accept Trump’s sacrifice for us. He destroyed his life, his business, and his future for his country, so that we could save our Constitution by ending the attack on our courts. I am so sad to see President Trump lose (which he probably will). But we should honor his sacrifice. He changed politics forever.

Republicans are now trending ‘big tent.’ Hispanics are rapidly switching teams. Each election we (the Republicans) gain more Hispanic and Black support. Liberals will eventually be the party of white women with college degrees in gender studies.

Sure, I could quibble with a couple of assertions in Mr. Weinberg’s elegy, but his basic point mirrors the central sentiments I’m hearing and reading across the Trump universe today (for which I now paraphrase): We are sad, but we believe Donald Trump stood for something that is not fairly represented in the national media or among political and economic elites: The American economic and political system is designed for the benefit of the few, not the many.

If I may interject, Donald Trump utterly failed in changing that inequity dynamic, even as I remain sympathetic to the guiding belief that I believe motivates most Trump voters: The American political and economic system doesn’t work for most Americans.

The election of Joe Biden fails to address that problem on every meaningful level. In fact, I think he will significantly set that project back. A Biden administration is a regression-to-the-mean that will re-empower the same forces in the Barack Obama presidency that engineered the fastest-growing wealth gap since the Reagan years. In that world, every problem can be solved by government-funded tax credits, block grants, or low-interest loan programs that do more for the balance sheets of corporate America (and the value of stock holdings among corporate executives) than for the household budgets of average Americans.

I may be too cynical, but experience tells me that, for most of us at least, happy days will not be here again under a Biden administration.

Joe Biden is the autonomic reflex reaction of the American people to the perpetually embattled Trump presidency. However, the realities that created the Trump phenomenon in the first place — increased wealth inequality, anxieties over immigration, unbalanced trade agreements, the continued economic marginalization of working-class Americans of all races and ethnicities, and a growing sense that the U.S. is in decline relative to the rest of the world — are stronger than ever.

The November 3rd election was an anti-Trump vote, but it in no way represented an endorsement of the Democratic Party, its ideas, or its desire to return to “normal times.”

To the contrary, the Republican Party’s core values remain the ideological fulcrum points for most policy debates in Congress — federal budget deficits are bad, what is good for corporate America is good for Main Street, U.S. military interventions are always motivated by good intentions, etc. — and is in an ideal position to stop any substantive legislation the Biden administration might propose.

As I said in a recent column, “The status quo won in a landslide on Tuesday.” Under Biden, the U.S. will keep bombing the same seven countries we bombed under Obama and Trump. The wealth gap will continue to grow. The U.S. will continue to have an inadequate health care system that denies millions of Americans access to needed health care and leaves millions more vulnerable to financial bankruptcy. And the planet will continue to warm at an alarming rate (see Figure 1 below), even after the U.S. rejoins the Paris Climate Accords.

Figure 1: UAH Satellite-Based Temperature of the Global Lower Atmosphere (through Oct. 31, 2020)

Nobody who cares about this country should be happy with the status quo.

I will not miss Donald Trump, as I am sick of the network news’ daily diet of “What stupid thing did Trump say today?”

However, if we are ever going to face the real problems in this country — the problems that gave rise to Trump in the first place — the ‘Chicken Little’-hysteria that surrounded the Trump administration will need to be replaced by something more nuanced and intellectually honest — and, therefore, less attractive to news audiences.

Which is why I have no illusions that anything will actually get better under the next presidential administration, even if the news network anchors (not on the Fox News Channel) insist otherwise.

  • K.R.K.

Send comments to: nuqum@protonmail.com

 

The status quo won in a landslide on Tuesday

[Headline graphic: The Triumph of Caesar: Trumpeters by Jacob of Strasbourg and Benedetto Bordone; Used under the CC0 1.0 Universal Public Domain Dedication license]

By Kent R. Kroeger (Source:  NuQum.com; November 4, 2020)

Conservative pundit George Will said after Barack Obama’s 2008 election victory that the loss was good for the Republican Party: “Republicans are a bad governing party, but an exceptional opposition party.”

Which soon will be evident as the Republicans spend the next four years calling into question the validity of this election and the president and Congress it elected.

And why shouldn’t they? Republicans and their wealthiest constituents win whenever their party can so badly muck up the legislative system nothing substantial or transformative ever gets passed. Doing nothing–the status quo–serves them as well as when they are in power and bear the aggravating responsibility of governing.

While the news networks are saying the final results for Tuesday’s election may not be known for days or even weeks, the actual result was hard-coded into the Constitution, monetary system, federal budget process and U.S. tax code years ago. America’s political and economic elites win every election. It’s not necessary to count all of those unprocessed mail-in ballots. We already have our outcome: Hail, Caesar!

Also clear is that the biggest losers on Tuesday were progressives who think their policies will ever get a fair hearing within the U.S. halls of power.

A Green New Deal? You’ll have to wait another four years. Address the looming student debt crisis? Not anytime soon. Shrink the nation’s growing wealth gap? Not gonna happen.

At a unique time in history when no Republican incumbent’s seat should have been safe, Kentucky Senator Mitch McConnell crushed his well-funded opponent by 20 points. One of my close Iowa friends insisted in the weeks before Election Day that Iowa Senator Joni Ernst was doomed. “She’s toast.” Supposedly, Theresa Greenfield, her Democrat opponent, was generating enthusiasm unlike anything they’d seen since a young Senator from Illinois traversed the state as he ran for president. Ernst won by almost seven points.

Joe Biden will be the next president, but any expectation of a mandate was crushed Tuesday night. Biden, similar to Trump in 2017, will limp into the presidency, already damaged goods — and that’s after every major news network (less Fox News) was basically a 24/7 Biden ad over the last five or six months. [Yes, there is data to support that charge; you’ll find it here. And don’t get me started on how bad journalism has become in this country. IMHO, we have propaganda outlets, not news organizations.]

If you are a progressive, do not despair, no matter how easy it is at the moment. Instead, now is the time to spiritually free yourself from the shackles of a one-party system that poses as a two-party system. It has failed, again, to give anyone hope for real change…and it will fail in the future as well. Cartoonist Charles Schulz warned us 50 years ago of this reality with his iconic bit where Lucy Van Pelt never lets Charlie Brown kick the football — yet Charlie never loses faith that ‘next time’ he will succeed. Spoiler alert: He doesn’t.

[In this analogy, think of Nancy Pelosi or Charles Schumer as Lucy and Bernie Sanders or Alexandria Ocasio-Cortez as Charlie Brown.]

Do you want an economically rational, universal health care system? Do you want an economic system where our best and brightest aren’t operating under the oppressive cloud of student debt for half of their working life? Do you want to end our pointless forever wars? Do you want to reduce the wealth inequality gap?

If you do, consider abandoning the Democratic Party. They are working for the same entrenched elite interests as the Republican Party. And I suggest a similar consideration for the populists in the Republican Party.

Progressives and populists joining forces? Now that would be an interesting political party.

  • K.R.K.

Send comments to: nuqum@protonmail.com

The 8 things President Biden could do to gain the respect of progressives

[Headline photo by Gage Skidmore; used under the CCA-Share Alike 2.0 Generic license.]

By Kent R. Kroeger (Source: NuQum.com; November 3, 2020)

No más. I can’t take it anymore.

If you thought the 2016 election was unwatchable, welcome to 2020 — an election where our main choices for president are two de facto Republicans.

Guess which party will win?

Sure, the “official” Republican Party lost control of the presidency and the U.S. Senate, but at the end of the day, it’s the Democratic Party that has moved to the right. And it is unlikely, based on the last Democratic presidency, that the new president will ever move credibly back to the left.

I predict more government spending that disproportionately benefits corporate balance sheets over American households, an expansion of U.S. military involvements across the globe, and a health care system that stays as inefficient as ever.

Through their presidential nominee, Joe Biden, the “official” Democratic Party is the living definition of a status quo political organization. Their candidate defends the interests of health insurance and pharmaceutical companies while economically vulnerable Americans disproportionately die from the coronavirus. As our planet continues to warm at an alarming rate, he plays coy on the issue of fracking — does he support it or not?

He defends the interests of the big tech companies even as they’ve been shown to censor content produced by dissenting voices from the left and right. He defends maintaining current troop levels in Afghanistan despite the 18-year war having done little to bring sustainable democratic institutions or economic prosperity to the country and, instead, having witnessed the Taliban expand its areas of control. In fact, there is no current U.S. military occupation or attempt at regime change that Joe Biden hasn’t promised to maintain or expand, and he has said he would continue current military budget levels (recent Biden statements on U.S. military policies are here, here and here).

Joe Biden isn’t an outlier within the Democratic Party. He is a perfect representation of what that party has become since the rise of Bill Clinton.

The last 12 months of media-fueled propaganda attacking Trump (and indirectly supporting Biden) has worked. Like good propaganda, a lot of it is based on fact. Donald Trump has royally bungled the coronavirus crisis. Yet, at its core is a dishonest project to suggest Trump is the cause of America’s ills, when, in fact, he is merely a symptom.

Unsurprisingly, disaffected Democrats and Republicans are not so easily deceived. They remain as disillusioned as ever about the direction of the country under a Biden administration.

In 2016 they wanted to tear down the status quo and in 2020 they are getting it back in a more arrogant form than ever.

Our political system is broken from the perspective of the average American, and the people who helped make it that way are about to return to power.

Yes, the Trump experiment mostly failed. He did fulfill some of his promises: he renegotiated bad trade agreements, spurred small business growth and employment on a historic level (particularly in minority communities), funded his wall (thanks to a secretively compliant Nancy Pelosi), tore up the Iran Nuclear Deal (which was actually a good agreement), and rolled back environmental regulations on the coal industry (which won’t change the fact that coal is still a dead man walking).

But when the final numbers are tallied, Donald Trump wasn’t the change-agent many thought he would be. He never ended our endless wars. He didn’t force the pharmaceutical companies to face real price competition. He didn’t even fulfill his 2016 campaign pledge to close the carried interest tax loophole for hedge fund managers.

Trump can blame the Democrats in Congress for some of these failures, but just as I was a harsh critic of Barack Obama and his inability to negotiate with a hostile Congress, I hold Trump to the same standard.

In business it is often said, ‘You have to give something to get something.’ Our current dysfunctional political system is incapable of such reciprocity. We are all to blame for that.

Nonetheless, I sit here today forced to accept the fact that Joe Biden — a self-described ‘deficit hawk’ and proud military ‘interventionist’ — is going to sit in the Oval Office for four years (health willing).

I try to keep my spirits up by believing that Biden will be a better president than he was as a U.S. Senator and vice president. I’d like to believe there are instances in history when a politician’s past performance did not predict his or her performance as president. If you can think of one, please let me know.

The evidence is overwhelming that our next president, in the recent past, willfully turned a blind eye to his own son’s capacious appetite for profiting from his father’s political office. That ain’t Russian propaganda, folks. That is a cold, hard fact.

Still, with my expectations planted in reality, I want to believe there is a slugger’s chance that a Biden administration could do some socially progressive things.

Hence, I have come up with 8 policies that, should a Biden administration implement them, would lead me to reconsider my attitude towards him.

For each of these 8 policies, I’ve included my guess as to the probability they could be achieved. Additionally, I’ve tried to include mostly policy ideas that are currently supported by a majority of Americans based on recent opinion survey data (e.g., the 2019 and 2020 Pilot Surveys conducted by the American National Election Studies).

The following are mainstream policy ideas.

Let us get started…

(1) Remove American combat troops from Syria, Iraq, and Afghanistan.

It is hard to believe, after over a decade of U.S. combat forces in Iraq, Afghanistan (and the oil- and gas-producing portions of Syria), that we are still having this debate. Apart from removing Saddam Hussein and the Taliban from power, the U.S. and its allies have accomplished little by remaining in these countries.

“The Taliban controls more territory than at any time since the U.S.-led invasion in 2001 toppled the fundamentalist group from power,” says Middle East-based journalist Frud Bezhan.

So why are we still there? The cynical answer: U.S. contractors are enriched by these occupations. The more polite answer is that the costs of keeping U.S. troops in Afghanistan are not large enough to force a withdrawal.

The Biden campaign’s cryptic statements on Syria are the most chilling of his military policy ideas. According to his campaign website, Biden promises in Syria to stand “with civil society and pro-democracy partners on the ground…and ensure the U.S. is leading the global coalition to defeat ISIS and use what leverage we have in the region to help shape a political settlement to give more Syrians a voice.”

Biden and U.S. military leaders always fail to mention that Iran and Bashar al-Assad’s Syrian forces have killed more ISIS fighters in the world than any other military force. In fact, the rise of ISIS can firmly be laid at the feet of the Obama administration.

According to writer Robert Morris, who has written extensively on U.S. foreign policy in the Middle East, one of the most pernicious and widespread myths is that the Obama administration’s reduction of combat troops in Iraq led to the rise of ISIS.

“Those who are trying to keep U.S. troops in Syria rely heavily on this myth,” says Morris. “It is central to the foreign policy ideas of both parties, and the ideology and future plans of the entire U.S. foreign policy establishment.”

The problem is that this myth about ISIS’ rise is “95 to 99 percent bullsh*t,” says Morris. “Obama’s Iraq withdrawal did not create the Islamic State (ISIS), but his intervention in Syria almost certainly did.”

Biden’s infrequent campaign mentions of Syria indicate he’s prepared to reimpose an interventionist, anti-regime policy that failed to overthrow Assad the first time, but successfully destabilized Syria and led to the death of almost 400,000 Syrians.

If establishment Democrats like Biden are consistent on anything, it is in their unwillingness to upset the military-industrial complex and our country’s foreign policy brain trust, even when they are demonstrably incompetent, as they have been with respect to the Middle East.

The chance a Biden administration would end these military adventures in any one of these countries? Close to zero.

(2) End US military involvement in Saudi Arabia/UAE’s war in Yemen

On the surface, this may be the one international conflict in which a Biden administration could do the right thing. The U.S. (along with several European allies) supplies Saudi Arabia and the United Arab Emirates (UAE) with significant intelligence and logistical support in their nearly six-year effort to remove the Houthis, who are Shia Muslims, from power in northwest Yemen.

To date, according to the Yemen Data Project, Saudi-coalition air raids have killed nearly 9,000 Yemenis and have created one of the world’s most dire humanitarian disasters.

In an apparent response to this crisis, the U.S. House and Senate voted nearly two years ago to condemn and end the Trump administration’s support for the Saudi coalition’s efforts in Yemen (which, in fact, had started under the Obama administration).

Has the issue of Yemen been prominently raised in the 2020 presidential campaign?

Of course not. So don’t expect a Biden administration to do anything to upset the status quo in that region. Saudi Arabia and the UAE are close allies to the U.S. and the Iranian-allied Houthis are not.

The chance a Biden administration ends our support for the Saudi-UAE war on Yemen: 10%.

(3) Rejoin the Iran Nuclear Deal (as negotiated by the Obama administration) and end sanctions immediately.

I have little positive to say about the Obama administration, but when it comes to the Iran Nuclear Deal — known formally as the Joint Comprehensive Plan of Action (JCPOA), signed in July 2015 — the previous administration hit a solid Texas League single. The JCPOA wasn’t perfect, but by bringing the Iranians into the constraints of an international agreement on nuclear weapons development, the Obama administration moved the ball forward on Middle East peace. Something the Clinton and G. W. Bush administrations did not do.

However, Trump’s short-sighted destruction of that deal has achieved nothing, except to move the region closer to a dangerous, full-scale ‘hot war’—which, though further enriching the U.S. defense contractors, sends shivers down the spine of global trading interests.

A war with Iran would likely be a far bigger foreign policy debacle than G. W. Bush’s unnecessary war with Iraq in 2003.

Thankfully, Biden (the candidate) has made rational statements about Iran during the 2020 campaign. “There’s a smarter way to be tough on Iran,” says candidate Biden. “This past month (August) has proven that Trump’s Iran policy is a dangerous failure. At the United Nations, Trump could not rally a single one of America’s closest allies to extend the UN arms embargo on Iran. Next, Trump tried to unilaterally reimpose UN sanctions on Iran, only to have virtually all the UN security council members unite to reject his gambit. Now there are reports that Iran has stockpiled 10 times as much enriched uranium as it had when President Barack Obama and I left office. We urgently need to change course.”

I don’t know what “reports” Biden is referencing, but I do believe Biden is prepared to reverse the significant damage Trump has done in the international community’s efforts to slow down Iran’s nuclear ambitions.

Not only should Biden recognize the dangers of Trump’s aggressive Iran policies, but the JCPOA was one of the Obama administration’s genuine foreign policy successes, and reviving it would not require any contentious battles with Congress. Bringing the Iran Nuclear Deal back from the dead should be a no-brainer.

The chance the Biden administration rejoins the JCPOA and ends sanctions against Iran: An optimistic 75%.

(4) Pass a student debt relief program that forgives a substantial proportion of debt for the neediest students and reduces interest rates for others.

Student debt is a $1.6 trillion crisis waiting to happen, and more than 30% of student loan borrowers are in default, late, or have stopped making payments six years after graduation.

Forty-four million Americans are directly under the thumb of this debt burden, but all Americans feel the effects of this mess as more and more student debtors are delaying the traditional milestones of adulthood: marriage, children and home ownership.

A 2015 survey by Bankrate.com found that 21 percent of student debtors have delayed marriage, 26 percent have pushed back having children, and 36 percent have put off buying a home.

Those choices have measurable outcomes throughout the economy, and most of them are negative.

Where has candidate Biden stood on the issue of student debt? To my surprise, this is one issue where the Biden campaign has been fairly concrete.

For example, Biden proposes changing the Public Service Loan Forgiveness (PSLF) program, where currently the remaining debt is forgiven after 10 years of payments, to a program where $10,000 of federal student loan debt is forgiven each year for up to five years.

Biden has also proposed an income-based loan repayment program that would cut monthly loan payments in half compared to Pay-As-You-Earn (PAYE), the repayment plan created under the Obama administration that has the lowest monthly and total payments of any income-driven repayment plan.

More broadly, Biden has proposed forgiving all tuition-related undergraduate federal student loan debt for borrowers who attended public colleges or Historically Black Colleges and Universities (HBCUs) and who earn less than $125,000 per year. Joe Biden has also said that he supports the $10,000 in federal student loan forgiveness proposal recently introduced by House Democrats.

Finally, Biden wants to restore bankruptcy discharge rights to student loans, which would allow debtors to stop paying their student loans if payments on those loans “impose an undue hardship” on the student and his or her dependents. [I’ll be nice and won’t mention that it was Senator Biden who helped roll back bankruptcy protection for millions of average Americans right before the 2008 recession. I won’t, but here is someone who will.]

All things considered, I believe Biden’s campaign rhetoric on student debt relief has been refreshingly specific and credible.

The chance the Biden administration passes a substantive student debt bill: a hopeful 50 percent.

(5) Decriminalize most drug possession offenses; stop using the criminal justice system to deal with drug users and move treatment to the social services and mental health communities

Nowhere is Biden’s congressional record more disconnected from current public sentiment than when it comes to U.S. crime policy. From his first days as a House member through his Senate career, Biden has aggressively positioned himself as a “crime fighter.” In that effort, Biden frequently cites the 1994 “Biden” Crime Bill, as he once called it, as his greatest legislative achievement.

The 1994 Crime Bill, the largest crime bill in U.S. history, provided for 100,000 new police officers, $9.7 billion in funding for prisons and $6.1 billion in funding for prevention programs, which were designed with significant input from experienced police officers. The bill also eliminated Pell Grants for prison inmates, criminalized gang membership, established a three-strikes provision that mandated life sentences for people with two or more violent felony convictions, and gave states incentives to lengthen sentences, including for drug possession offenses.

However, not everyone who has suffered from drug addiction or lives in a minority community shares Biden’s love for the 1994 Crime Bill.

Leading into the 2016 election, activist Jeremy Haile, the federal advocacy counsel at the Sentencing Project, said, “Any Democrat that is interested in gaining support among the current electorate, particularly the progressive civil rights communities, is going to have to say that previous tough-on-crime policies were a mistake.”

The Black Lives Matter movement of 2020 only amplifies Haile’s earlier statement. “Many of us who grew up in the black community in the ’90s,” Patrisse Cullors, a political organizer and co-founder of the Black Lives Matter movement, told the New York Times. “We witnessed the wave in which the policies that came from both federal government but also local government tore our families apart.”

And what does Biden still say about the 1994 Crime Bill?

Asked recently in a televised town hall, Biden admitted the 1994 Crime Bill was a “mistake.” However, just a year ago he was still defending the bill.

To be fair, the most impactful crime bills were passed in the two decades prior to the 1994 Crime Bill (my analysis on that topic can be found here). Nonetheless, at every opportunity, both as a House and Senate member, Biden has proudly voted for tougher crime laws. That position may still serve him well with the majority of Americans, including populists, but to left-leaning progressives, Biden is not on their side of the issue.

And, as president, I think Biden’s legislative record will remain faithful to his ‘tough-on-crime’ past, even if his rhetoric will be all over the map.

The chance the Biden administration pushes for the decriminalization of most drug possession sentences and takes the criminal justice system out of the drug enforcement process: No chance.

(6) Substantive reform of the U.S. health care system, including at least one of the following policies: (a) lowering the Medicare eligibility age to 55, (b) extending Medicare to all dependent children, or (c) offering all Americans the option to buy into the Medicare program through “Obamacare” or their employer

Biden has been clear on this. He will work to restore those features of the Obama administration’s Affordable Care Act (“Obamacare”) that were rolled back by the Trump administration. Beyond that, he has promised only marginal changes to the U.S. health care system and has rejected any call for universal health care. Biden does not support “Medicare for All” and will oppose any effort coming close to it.

That said, he has coyly suggested “a public option like Medicare” could be added to “Obamacare” (though the details of this public option are thinly described on his campaign website) and he has proposed lowering the Medicare age eligibility to 60 years of age (down from the current 65).

However, the coronavirus has revealed the deep, systemic flaws in the U.S. health care system and the disproportionate burden this pandemic has placed on low- and middle-income households who cannot afford potential out-of-pocket expenses related to coronavirus treatment.

Biden is right when he says Americans are dying because of current U.S. health care policies under Trump. What Biden doesn’t tell you is that most of those policies have been a joint, 70-year project by the Democratic and Republican parties to protect health insurance companies, health care providers, and pharmaceutical companies from universal health care. Tweaking our ailing health care system, as Biden proposes to do, will not significantly improve U.S. health outcomes related to the coronavirus or any other health problem.

In the end, Biden will never support policies moving the U.S. significantly closer to universal health care.

The chance the Biden administration passes substantive health care legislation: While there is a fair chance (25%) that the Medicare age eligibility standard will be lowered, the offering of a genuine “public option” or universal health care for all child dependents is not going to happen on Biden’s watch.

(7) Give U.S. households a federal tax credit for purchasing an all-electric vehicle; and an additional tax credit or cash incentive for a household to simultaneously trade in an existing combustion engine vehicle

I had to put one easy chip shot for Biden on this list. This is it: Restoring and expanding a federal tax credit for purchasing an all-electric vehicle, as well as reviving the “Cash for Clunkers” program.

Biden has already endorsed these ideas, which are aimed at helping move the U.S. toward a green economy and at meeting the UN Intergovernmental Panel on Climate Change’s goal of global net human-caused carbon dioxide (CO2) emissions reaching ‘net zero’ by 2050.

Electric cars alone won’t get us to that goal, but with the continued decline of coal-based electricity generation, the increased greening of U.S. corporate energy consumption, and recent advancements in carbon capture and sequestration technologies, the U.S. is going to significantly move the ball forward on combating climate change under a Biden administration. [Yes, I know Biden supports fracking, but that is small turnips compared to an all-electric U.S. vehicle fleet.]

The chance these two electric vehicle policies become law in a Biden administration: A strong 90%.

(8) Pardon Wikileaks Founder Julian Assange

Just when it seemed like I was warming up to the incoming Biden administration, the topic of press freedom and the U.S. government’s current effort to prosecute news publisher Julian Assange for publishing classified U.S. documents related to the Iraq War brings those good feelings to an abrupt halt.

Sadly, I don’t need to single out Joe Biden on this issue. I can’t name a single U.S. politician, apart from Hawaii Representative Tulsi Gabbard, who has stood firmly by our First Amendment and consistently supported the release of Assange. If he is tried and convicted in a U.S. court of crimes under the 1917 Espionage Act, the First Amendment rights of all Americans will be diminished.

What has Biden said about Assange? He called Assange a “high-tech terrorist.”

As is so often the case with Biden, he takes a meager understanding of the facts and exploits our worst instincts and biases to gain cheap political points.

All Americans, not just Biden, would benefit by learning the facts surrounding Assange. [You can find a good, balanced summary of the Assange case here by former New York Times reporter James Risen.]

The facts, as they are known today, show that Assange did not conspire with the Russians to defeat Hillary Clinton in 2016 (which, by the way, has nothing to do with the charges that keep Assange in a U.K. prison, but does have a lot to do with why establishment Democrats are willing to damage our First Amendment protections for petty political purposes). He did not offer assistance to Chelsea Manning on how to anonymously infiltrate classified U.S. intelligence systems (which is among the charges against him). Finally, Assange and Wikileaks did not expose U.S. intelligence assets in the Middle East (or elsewhere) to harm, as often charged in the mainstream media. To the contrary, the facts consistently show how diligent and thorough Assange and Wikileaks were in redacting the names of intelligence assets from the Wikileaks-released documents.

What Assange did do is publish accurate information that exposed potential U.S. war crimes in Iraq and, most certainly, revealed facts about the U.S. military occupation of Iraq that reflected negatively on the U.S. government. What Assange and Wikileaks did is fundamentally no different than the New York Times and Washington Post publishing The Pentagon Papers almost 50 years ago. How ironic it is that those two news organizations have been largely silent on the First Amendment implications of his case.

The central character in The Pentagon Papers drama, former Pentagon intelligence officer Daniel Ellsberg, offers a biting analysis of the Assange case that you can watch here from The Jimmy Dore Show. [And is there any bigger indictment of today’s mainstream journalism than the fact that you can get better, more accurate information on the Assange case from watching the podcast of a jagoff nightclub comedian?]

The chance the Biden administration will pardon Assange: Negative zero.

Final Thoughts

One of the most demoralizing features of Joe Biden is how his 2020 campaign rhetoric contradicts much of his legislative record. He voted for harsher drug penalties before he came out against them as a presidential candidate. As a candidate, Biden said he wouldn’t touch Social Security or Medicare, even though as a ‘deficit-hawk’ Senator he spoke repeatedly about his support for putting those programs under the budgetary knife in the name of lowering the national debt. He opposed fracking before he recently came out for it — a stance he then clarified to mean he was always for it even when he was against it.

The national media has pulled double-duty in protecting the American voter from knowledge of these numerous inconsistencies between candidate Biden and his legislative record.

Unfortunately, as president, it will be much harder (though not impossible) for Biden to hide those inconsistencies from the millions of progressives who are rightfully cynical towards him at the start of his new administration.

  • K.R.K.

Send comments to: nuqum@protonmail.com