Culture war games: a kind of class double-think

What explains the genius of the American Founders?
By Gordon S. Wood

English politics were dominated by about 400 noble families whose fabulous scale of landed wealth, political influence and aristocratic grandeur was unmatched by anyone in North America. The English aristocrats were arrogant, complacent about their constitution and unwilling to think freshly about most things. When they thought about the outlying and underdeveloped provinces of their greater British world at all, they tended to look down upon them with disdain. In the eyes of the English ruling class, not only North America but also Scotland was contemptible and barely civilized.

Yet at the same time, both the Scots and the Americans knew that the polite and sophisticated metropolitan center of the empire was steeped in luxury and corruption. England had sprawling, poverty-ridden cities, overrefined manners, gross inequalities of rank, complex divisions of labor and widespread manufacturing of luxuries — all symptoms of social decay.

The Dream Hoarders
By Richard V. Reeves

I am British by birth, but I have lived in the United States since 2012 and became a citizen in late 2016. There are lots of reasons I have made America my home, but one of them is the American ideal of opportunity. I always hated the walls created by social class distinctions in the United Kingdom. The American ideal of a classless society is, to me, a deeply attractive one. It has been disheartening to learn that the class structure of my new homeland is, if anything, more rigid than the one I left behind, especially at the top.

Indeed, the American upper middle class is leaving everyone else in the dust. The top fifth of U.S. households saw a $4 trillion increase in pretax income in the years between 1979 and 2013. The combined rise for the bottom 80 percent, by comparison, was just over $3 trillion. The gap between the bottom fifth and the middle fifth has not widened at all. In fact, there has been no increase in inequality below the eightieth percentile. All the inequality action is above that line.

Income growth has not been uniform within the top fifth, of course: a third of the income rise went to the top 1 percent alone. But that still left $2.7 trillion for the 19 percent just beneath them. Failing to join the ranks of the plutocrats does not mean life as a pauper.

Very often it seems to be those quite near the top of the distribution who are most angry with those at the very top: more than a third of the demonstrators on the May Day “Occupy” march in 2011 had annual earnings of more than $100,000.

An individual billionaire can have a disproportionate influence on an individual politician (in Donald Trump’s case, by becoming one). But the size and strength of the upper middle class means that it can reshape cities, dominate the education system, and transform the labor market. The upper middle class also has a huge influence on public discourse, counting among its members most journalists, think-tank scholars, TV editors, professors, and pundits in the land.

The typical child born into the American upper middle class is raised in a stable home by well-educated, married parents, lives in a great neighborhood, and attends the area’s best schools. They develop a wide range of skills and gain an impressive array of credentials. Upper middle-class children luck out right from the start, even though this country was founded on antihereditary principles.

But while the inheritance of titles or positions remains forbidden, the persistence of class status across generations in the United States is very strong. Too strong, in fact, for a society that prides itself on social mobility.

As inequality between the upper middle class and the rest grows, parents will become more determined to ensure their children stay near the top. We will work hard to put a “glass floor” under them, to prevent them from falling down the chutes. Inequality and immobility thus become self-reinforcing.

The problem we face is not just class separation, but class perpetuation.

In a market economy, the people who develop the skills and attributes valued in the market will have better outcomes. That probably sounds kind of obvious. But it has important implications. It means, for example, that we can have a meritocratic market in a deeply unfair society, if “merit” is developed highly unequally and largely as a result of the lottery of birth.

Postsecondary education in particular has become an “inequality machine.” As more ordinary people have earned college degrees, upper middle-class families have simply upped the ante. Postgraduate qualifications are now the key to maintaining upper middle-class status. The upper middle class gains most of its status not by exploiting others but by exploiting its own skills. But when the income gap of one generation is converted into an opportunity gap for the next, economic inequality hardens into class stratification.

Class rigidities of this kind may blunt market dynamism by reducing the upward flow of talent and leaving human capital underutilized among the less fortunate. To take just one narrow example, fund managers from poor backgrounds perform better than those from more affluent families, controlling for a range of institutional factors, according to one study. It seems likely that this is because they have to be smarter in the first place in order to make it into financial services.

We also engage in some opportunity hoarding, accessing valuable, finite opportunities by unfair means. This amounts to rigging the market in our favor.

When we hoard opportunities, we help our own children but hurt others by reducing their chances of securing those opportunities. Every college place or internship that goes to one of our kids because of a legacy bias or personal connection is one less available to others. We may prefer not to dwell on the unfairness here, but that’s simply a moral failing on our part. Too many upper middle-class Americans still insist that their success, or the success of their children, stems entirely from brilliance and tenacity; “born on third base, thinking they hit a triple,” in football coach Barry Switzer’s vivid phrase.

Three opportunity hoarding mechanisms stand out in particular: exclusionary zoning in residential areas; unfair mechanisms influencing college admissions, including legacy preferences; and the informal allocation of internships.

Opportunity hoarding is bad for society in the same way that commercial market rigging is bad for the economy. It is good that parents want the best for their kids, just as it is good that company directors want to make profits. But companies should make their profits by competing fairly in the marketplace. That’s why we stop them from forming cartels. In just the same way, we need to stop parents from rigging the market to benefit their own kids. Right now, the markets that shape opportunity, especially in housing and education, are rigged in our favor.

The American dream is not about superwealth or celebrity. The American dream is of a decent home in a pleasant neighborhood, good schools for our kids, a steadily rising income, and enough money put aside for an enjoyable retirement. It is about sustaining a strong family and seeing your children off to a good college.

It has become a staple of politicians to declare the American dream dying or dead. But it is not dead. It is alive and well; but it is being hoarded by those of us in the upper middle class.

Stop Pretending You’re Not Rich
By Richard V. Reeves

There’s a kind of class double-think going on here. On the one hand, upper-middle-class Americans believe they are operating in a meritocracy (a belief that allows them to feel entitled to their winnings); on the other hand, they constantly engage in antimeritocratic behavior in order to give their own children a leg up. To the extent that there is any ethical deliberation, it usually results in a justification along the lines of “Well, maybe it’s wrong, but everyone’s doing it.”

The United States is the only nation in the world, for example, where it is easier to get into college if one of your parents happened to go there. Oxford and Cambridge ditched legacy preferences in the middle of the last century. The existence of such an unfair hereditary practice in 21st-century America is startling in itself. But I have been more shocked by the way that even supposedly liberal members of the upper middle class seem to have no qualms about benefiting from it.

The upper middle class is also doing lots right, not least when it comes to creating a stable family environment and being engaged parents. These are behaviors we want to spread, not stop. Nobody should feel bad for working hard to raise their kids well.

Things turn ugly, however, when the upper middle class starts to rig markets in its own favor, to the detriment of others. Take housing, perhaps the most significant example. Exclusionary zoning practices allow the upper middle class to live in enclaves. Gated communities, in effect, even if the gates are not visible. Since schools typically draw from their surrounding area, the physical separation of upper-middle-class neighborhoods is replicated in the classroom. Good schools make the area more desirable, further inflating the value of our houses. The federal tax system gives us a handout, through the mortgage-interest deduction, to help us purchase these pricey homes. For the upper middle classes, regardless of their professed political preferences, zoning, wealth, tax deductions and educational opportunity reinforce one another in a virtuous cycle.

Top 20 Percent Of Americans ‘Hoard The American Dream’
By NPR

INSKEEP: OK. You come from a country with princes and princesses and dukes and duchesses and knights, and you said you were surprised by how class bound the United States is. What are you talking about?

REEVES: Yeah. You know, I never thought I’d say this, but I sort of miss the class consciousness of my old country which I grew up hating. The reason I miss it is because at least we’re aware of it. It seems to me that in the U.S., you have a class system that operates every bit as ruthlessly as the British class system but under the veneer of classless meritocracy. So there isn’t even a self-awareness.

INSKEEP: Have we touched the main reasons that you say the system is rigged, housing and education?

REEVES: Housing, education and then entry into work – the workplace. So we see from one survey of colleges that just 3 in 5 graduating college students have done an internship. But the way those internships are handed out is very, very far from meritocratic. So to the extent that they’re valuable opportunities, that is yet another way in which we rig the system.

INSKEEP: Where did this system come from? Did somebody invent it?

REEVES: Many of the mechanisms actually have racist roots. So the zoning laws, quite often they had racist origins, but they now work for class.

INSKEEP: Oh, saying, for example, that everybody has to have a 1-acre home lot gets lower-class apartments out of your neighborhood.

REEVES: Absolutely, keeps them out, yes. Legacy preferences, which are one of the ways that college admissions are rigged. Legacy preferences are genuinely extraordinary in the literal sense.

INSKEEP: Meaning if your mother went or your father went, you can go.

REEVES: Yeah. And they’re literally extraordinary in the sense that no other country in the world does that. Unthinkable in the land of my birth that…

INSKEEP: Unthinkable in Britain, really?

REEVES: Unthinkable. They disappeared half a century ago, if not longer. They’re unthinkable everywhere, Steve, except the United States of America. They were actually an attempt by elite colleges to keep Jewish students out.

And so what you see is the sort of mechanisms that evolved for different reasons. Now they interweave with each other to create a deeply unequal society.

INSKEEP: So to what extent are the trends you describe driving the history we’re living right now?

REEVES: I have come to believe that the dangerous separation of the American upper middle class from the rest of society is a huge problem for politics because there’s a sense of a bubble. There’s a sense of people who are kind of making out pretty well from current trends and who are increasingly separate occupationally, residentially, educationally and economically from the rest of society.

They are also disproportionately powerful. And the fact that they are not only separate from the rest of society but unaware of the degree to which the system works in their favor strikes me as one of the most dangerous political facts of our time.

INSKEEP: Is this trend why Donald Trump became president of the United States?

REEVES: I do think the – to the extent that there was a kind of anger that lay behind Trump’s support and anger at a certain class, I think it’s pretty clear that that anger was at us. It was at the upper middle class.

And I’m not legitimizing that anger at all, but I am pointing out that to some extent the system is in fact rigged in our favor. And unless we’re willing to admit that and do something about it, then to some extent people are right to be angry at us.

The Symbolic Professions Are Super WEIRD
By Musa al-Gharbi

Symbolic capitalists are largely concentrated in specialized hubs tied to global circuits for various symbolic economy industries. Living in these hubs changes individuals and how they relate to others.

As sociologist Georg Simmel explained, cities are liberating in part because people are forced to traverse different social contexts over the course of any given day — and to occupy different social roles and interact with different social networks across these contexts — amidst untold numbers of others, each pursuing their own agendas largely independent of (and indifferent to) one’s own.

In order to manage the complexity and diversity of their social milieu, Simmel argued, city dwellers tend to cultivate an intellectualized distance from most phenomena they encounter. They abstract away from the particulars (which become untenable to track or pay attention to at scale). They adopt a blasé attitude towards most situations and individuals they’re confronted with (you’re faced with unusual people, bizarre encounters, and alien modes of self-presentation on a regular basis). Relationships become increasingly transactional and contingent (it’s impossible to meaningfully invest in most people you interact with due to the sheer number of contacts one has to maintain and the increased transience of where people live, what they do, and so on).

On the one hand, these dispositions give city dwellers a lot more freedom to engage in various forms of social deviance with minimal consequences or judgement: for the most part, no one is paying attention to what you’re doing, and no one really cares in any event. However, these same attitudes also make it difficult to cultivate deep and durable relationships or to convince others to support one’s preferred causes (precisely because no one is really paying attention to or caring about what you’re doing — they have other fish to fry).

In order to secure and sustain the interest of potential mates, allies, employers or benefactors – and indeed, in order to see oneself as an interesting and worthwhile person – city dwellers feel a strong need to cultivate unique personas. It becomes a psychological and practical imperative to differentiate oneself from the masses that everyone crosses paths with daily, and to set oneself apart from the other potential employees, life partners, or comrades that others could just as easily associate with instead of oneself.

As sociologist Elizabeth Currid-Halkett demonstrated, denizens of symbolic economy hubs spend far more of their money on things like fancy clothes, makeup, cosmetic surgery and so on — symbolic capitalists more than most — in a bid to stand out from the crowd. We also gravitate towards extreme and unusual moral, cultural, and ideological postures to distinguish ourselves from “lesser” people.

Meet the Symbolic Capitalists
By Musa al-Gharbi

Among symbolic capitalists, ‘wokeness’ has come to serve as a sign that someone is of an elite background or is well-educated. Through espousing ‘woke’ beliefs, symbolic capitalists (and aspirants to the symbolic professions) demonstrate that they are the kind of person who ‘plays ball’ — they are aware of, and are willing and able to competently execute, the appropriate scripts for cultural and intellectual elites in response to various cues. That is, ‘wokeness’ is increasingly a means of identifying who is part of ‘the club’ — and it provides a basis for deeming those who are not part of the club as being unworthy of symbolic capital (i.e. people who fail to embrace elite conceptions of ‘social justice’ are held to be undeserving of honor, fame, prestige, deference, etc.).

Musa al-Gharbi: We Have Never Been Woke
By No Small Endeavor

The fundamental tension is that while a lot of us are committed to social justice, we also really want to be elites.

We do want to see the oppressed be liberated. We do want to see the poor be uplifted in all of this. But we want to enjoy a higher position than most other people, and that drives us to make these more symbolic gestures instead of doing things that we could do in our own lives that could make a difference.

In a lot of sociological circles and activist circles, there’s this strong emphasis on white privilege, on getting people to acknowledge and confess their privilege …

But one of the ways this tendency to encourage people to confess their privilege can actually distort people’s understanding of the social world is the implication that all whites have the same privilege: that no matter who you are, whether you’re a poor person who lives in a small town and is underemployed or a white affluent professor teaching at Harvard University, you benefit the same from your whiteness. That’s actually not an apt understanding of the social world.

So there’s a lot of empirical research that shows that when you educate people about privilege, it doesn’t … actually lead them to think any better or behave any differently towards non-white people. What it does is lead them to dismiss and disparage and write off the suffering of poor white people.

So basically, if you teach people that all white people have this privilege, then what a lot of people take from that is if you’re white, and you’re poor, then you actually really, truly deserve your poverty for not making good use of your privilege. That’s actually the main impact of this teaching, according to the best available empirical research.

Yeah, so when you look at who’s most likely to accept or espouse what you might think of as woke views or woke modes of political engagement, etc., it tends to be a pretty narrow and unusual slice of society: in particular, highly educated, relatively affluent people who live in cities or suburbs and are tied to the knowledge economy professions in some way.

So people who work, or aspire to work, in jobs like media and finance or science and technology or education and so on and so forth. One of the things that’s striking about a lot of these professions is that from the beginning of when these jobs were … created, the high pay that we receive, the social prestige that people in these jobs receive, the autonomy we have compared to other employees, these were justified on the basis that the professions we work in are fundamentally altruistic and that we serve the common good.

And so you should give us more pay. You should give us more autonomy, you should give us more respect and prestige, because we’ll use that to help everyone in society, including and especially the least advantaged.

So the fact that these jobs have been justified this way, that our salary and our prestige are justified this way, creates a really unusual mode of competition within these specific professions, within this specific slice of society. People … who do a really good job at presenting themselves as altruistic, as committed to helping people who need help, and so on, are viewed as being especially worthy, whereas people who are successfully painted as being beholden to special interests, as inordinately concerned about themselves, or as holding the wrong kind of views are viewed as being unworthy of power, of status, unworthy of even keeping their jobs, and they often do lose their jobs when they’re successfully painted that way.

So instead of just competing to prove that you are the best advocate or ally for disadvantaged groups, a growing number of people who are in these professions claim to be literal embodiments of different disadvantaged groups or claim to be both allies and victims of various forms of social injustice. This created a whole secondary arms race for status and power and influence within the symbolic professions, and so growing numbers of elites today, especially elites tied to these specific professions (arts and entertainment, law, finance, education, science, technology, and so on), try to find different ways of describing themselves as having belonged to some kind of historically marginalized and disadvantaged group.

So nonwhite, … female, neurodivergent, disabled, LGBTQ. And the sphere of different ways that you can be LGBTQ, of different ways that you can qualify as disabled or neurodivergent, and so on, also keeps expanding, allowing a larger and larger stratum of the elite to present themselves as marginalized and disadvantaged instead of as elites.

And this is actually a problem for a lot of social analysis, because when a lot of social scientists talk and think about problems like inequality, or talk and think about elites, they talk about people like Elon Musk or Jeff Bezos, not about people like Oprah Winfrey or Jay Z, who are also billionaires. If you’re a nonwhite billionaire, or if you’re a queer, female, neurodivergent or disabled elite in general, you’re often excluded from analysis, so that the focus is on cisgender, heterosexual, able-bodied, … neurotypical white men. And this is a problem because in a world where growing numbers of elites don’t identify as that, if you only focus on those people, then you have an increasingly impoverished understanding of who the elites are, of who benefits from the social order, of how different social problems and different social phenomena reproduce over time.

So anti-woke symbolic capitalists, people who move aggressively in the opposite direction of most of their peers, they often do actually portray themselves as being committed to feminism and gay rights and anti-racism and related causes. They just think that those people, the wokes, are doing it wrong.

But then an example that I talk about in the book is that a lot of anti-woke people, for instance, align themselves with Martin Luther King and his vision for, as they understand it, a colorblind approach to racial justice, to transcending our racial differences.

But what’s interesting about that is that a lot of the people say, oh, we shouldn’t be … doing Black Lives Matter-style anti-racism, we should be doing Martin Luther King-style anti-racism. Okay, but are they doing Martin Luther King-style anti-racism? Are they trying to organize or participate in marches like the Poor People’s March and things like this?

Are they trying to unite working people across racial lines to address social problems? No. They’re doing the same thing that they’re criticizing a lot of the woke people for, which is sitting in their armchairs, railing about social problems. They equate criticizing the woke people with doing something, when it’s not doing something any more than changing their Instagram thing is doing something.

There’s a lot of symmetry that I underline in the book between a lot of these different symbolic capitalists. They often share the same mindsets, they have the same kinds of lifestyles. They live in the same kinds of communities. And so, for our purposes, there’s not a big difference.

And one thing that’s important, I think, is that a lot of times when people are committed to activism or committed to making change in social problems, what they’d like to do is find some way to … address the social problem that doesn’t require any kind of meaningful sacrifice or change on their part. The idea that’s most popular on the left among mainstream symbolic capitalists is basically that if we can just tax people like Jeff Bezos or Elon Musk enough, we can solve all of these problems and we don’t have to change anything, we can just continue-

This is actually a fallacious argument for a few reasons. First, because one of the problems with taxing people like Jeff Bezos and Elon Musk is the fact that after we collect that money from them, who’s going to redistribute it? Well, it’s going to be us. The idea that we’re going to take this money and redistribute it evenly to the people who need it without taking anything for ourselves or our own communities, our own families, or enriching ourselves in any way is nonsense.

And you can see this, actually, as I point out in the book, when you look at what I call the First Great ‘Awokening’, back in the 1920s, shortly after the income tax was passed and a lot of philanthropy as we understand it today started to be established. A lot of money was taken from the Gilded Age elites and redistributed.

But when you look at who it was redistributed to, the primary wealth transfer that occurred during that period was actually from the wealthy to the upper middle class. We took from the rich and we gave to ourselves. And that’s exactly the kind of thing that you might expect to happen again today.

In fact, if you look at the allocation of wealth in society, it’s definitely true that the top 1% own a radically disproportionate share of wealth, but they don’t own most of it. If you look at the top 20%, though, as someone like Reeves might suggest, the top 20% of Americans control 72% of all wealth in the country, leaving the other 80% of Americans to play with the other 28% of wealth. The idea that you can just tax the top 1%, leave the rest of the top 20% alone, and somehow solve these problems is just not true.

You can’t be an egalitarian social climber; you can’t be an egalitarian elite. These are two drives that we have, that we’re sincerely committed to on both ends, but they’re in fundamental tension. And when push comes to shove, when we have to choose between perpetuating or enhancing our own position, or helping our children reproduce our position or climb up the ladder further, versus living our social justice ends (and we are put in positions where we have to choose, in many instances), it’s the former drive that ends up dominating and subverting our pursuit of social justice, driving us to make these more symbolic gestures instead of doing things that we could do in our own lives that could make a difference.

Sometimes when you read social justice oriented literature, the impression that you get is that the goal, the social vision for the world, would be to basically flip the existing hierarchies to, ‘that’s enough of white people oppressing people, let’s let black people oppress the whites for a while,’ or ‘let’s let the women oppress the men for a while,’ ‘let’s invert the status hierarchy,’ rather than taking the ladder down, let’s just change who’s at the bottom and who’s at the top.

And so oftentimes in progressive spaces, people actually seem indifferent or even seem to experience, frankly, a perverse joy in the suffering of less affluent white people and the suffering of … men. There’s a whole bunch of social data that people like Richard Reeves and others have compiled showing that by a lot of indexes, young men, especially less elite men, especially non-white men, are languishing in contemporary society. They’re growing increasingly likely to die deaths of despair. They’re increasingly unlikely to move out and live independently, to get married, to be employed consistently. They’re underemployed or unemployed altogether at increasingly high rates. When you look at who it is that’s most likely to commit violence, but also to die of violence, and who’s most likely to be incarcerated and so on and so forth, it’s men.

But in a lot of feminist spaces, if you want to talk about the plight of men, if you want to talk about the fact that in a lot of fields, men are dramatically and increasingly underrepresented in those fields, the modal response you’ll get is, ‘cry me a river. Ooh, boo hoo for the poor men.’ And the same is true for whites.

Like, most poor people in America are white. It’s true, statistically, that certain … non-white populations are more likely, per capita, to be poor than white people are. But when you look at who the poor people overall in America are, a plurality of poor people in the United States are white. And so if you’re only concerned about poverty when it affects certain groups, then you’re going to miss the bulk of poverty. That’s the idea that poverty is bad when it’s Black poverty.

But when white people are poor, they deserve it for not taking advantage of their privilege, or they deserve it because they hold noxious views, because they hold the wrong views or vote for the wrong people. Holding that kind of position is not actually an egalitarian position. It’s a call to change the hierarchy, to basically have the same kind of exploitation, the same kind of oppression, but just to change who’s the oppressor and who’s the oppressed.

And that’s not exactly social justice.

Woke Is Over. Is Anti-Woke About to Go the Same Way?
By Nia-Malika Henderson

NH: What about the shift of non-White voters toward Trump?

MA: That predates Trump and it’ll probably continue after him. It seems to be driven more by voters’ alienation from the Democratic Party than by the unique characteristics or appeal of Donald Trump.

There are certain aspects of Trump’s policy positions that probably appeal to non-White voters in a way that progressives are often reticent to talk or think about. So for instance, if you look at who in the Democratic coalition was more supportive of, say, the Muslim ban when Trump was advocating it, well, that would be Black and Hispanic voters. Immigration, gender affirming care for minors, talking about sexuality in K-12 schools — who’s most skeptical of that? Religious minorities, ethnic minorities, folks who are more likely to subscribe to traditional views about gender and sexuality.

This is tough for progressives to reckon with because we tend to think all these social justice causes go together, and that there aren’t meaningful frictions or tradeoffs. But in fact, the people that we claim to be advocating for and representing, that’s not how they understand these issues at all.

Trump Changes Nothing
By Musa al-Gharbi

… there is abundant data showing that mainstream symbolic capitalists are out of touch with most other Americans on political and moral issues, have pretty bad instincts for what messaging will resonate with “normie” voters, and are quite resistant to recognizing just how extreme and alienating we actually are. However, even when political practitioners exercise discipline in identifying and refining messages that should play well with the masses, party activists and representatives regularly go “off script” and deliver political messaging that is persuasive and satisfying to them personally, with little regard for the available information on what might resonate with others.

We saw this play out in 2024, when Democrat-aligned polling consistently showed that the “democracy is on the ballot” messaging was their weakest strategy for moving voters. Nonetheless, the Harris campaign, her surrogates and supporters hammered this message home aggressively, at the expense of more effective strategies, because it was the message that was most satisfying for them to tell: they were brave heroes turning the country away from contemporary Nazism and towards a progressive and prosperous future. Sounds good, except the “Nazis” ended up winning, in no small part because the “good guys” couldn’t be bothered to focus on addressing the priorities and concerns of other people… whose votes are needed to win elections.

Consider the culture industries: movies, video games, TV shows, and so on. There are lots of data points suggesting that inserting ham-fisted progressive messaging and symbolism into these cultural products alienates audiences, leading the product to “bomb.” One might think professionals who produce these cultural outputs would aggressively work to avoid this. After all, bylines tied to highly-successful outputs can open lots of doors. However, reality does not conform with expectation here.

This is, first, because a byline from a major studio output “counts” for boosting everyone’s resumes even if the product is not successful. And it’s easy for any individual contributor to pass the blame — to tell stories about how their work was rock solid and the failure of the product was caused by “others” making bad decisions. As a result, it’s possible to chronically kill the products you touch while successfully painting yourself as a valuable contributor nonetheless (one who has had the misfortune of getting saddled with bad teams that messed up high-potential big studio projects against your alleged protestations). Put simply: there is little career accountability for those who insert financially destructive moral and political content into works. There’s no real “skin in the game.”

And so, for many, if they had to choose between a non-“woke” blockbuster versus a product that aggressively pushes progressive politics but fails commercially – they’d choose the latter, no contest. The opinions they care about are demonstrably not those of prospective consumers (who, if they don’t “get it,” f*** them); they’re far more worried about distinguishing themselves among their peers. The commercial success of the product is somewhat beside the point. The same holds for the products’ concrete political impact, for that matter.

We can see this latter point clearly because as politics, the messaging and symbolism inserted into these products is usually self-defeating: if you want a cultural work to persuade, you must get people to consume your content who don’t already agree with you. In a world where the cultural product is so overtly skewed that it only ends up getting consumed by a small and niche subset of society who are already fully on-board, then the work is politically and culturally sterile. It’s not “radical,” it’s pointless. Those genuinely committed to “moving the meter” politically would be better served by adopting a more subtle and moderate approach to progressive messaging and symbolism to ensure that the people who “need” the message actually consume it (instead of maximizing moral and political purity at the expense of losing the audience).

But again, such practical concerns are often lost on mainstream symbolic capitalists, many of whom would rather be “correct” and “pure” while losing than make any kind of moral or political compromise and succeed. And we have the luxury to adopt this posture because we’re largely insulated from the actual consequences of politics and economics.

In periods when symbolic capitalists, our institutions and our outputs grow especially political, the main thing we actually achieve is backlash against “our” causes, organizations and institutions – often driven by the very people we view ourselves as advocates for and representatives of (in the 2024 election, for instance, GOP gains were driven by less affluent and less educated people, religious minorities, young people, women and non-whites, even as highly-educated and affluent whites went the other direction).

One might think the core “trust the science” constituency would defer to the abundant evidence that symbolic capitalists’ attempts at influencing politics through their work rarely produce the desired outcomes and, perhaps, they would refrain from distorting their work into (likely counterproductive) moral, political and cultural “interventions.” But alas, that is not the world we live in.

Instead, the first time Trump was elected, large numbers of scientists, engaging as scientists, took part in “March for Science” demonstrations which implicitly (and sometimes explicitly) positioned “science” in opposition to being a conservative, Republican or Trump supporter. As political scientist Matt Motta later showed, these demonstrations did nothing to undermine support for Trump. Their main effect was to reduce trust in scientists.

This result should not have been surprising: the evidence is overwhelming that people don’t like politicized science. Even when constituents agree with the message, scientists engaging in politics tends to lower public trust. Citizens want scientists to help illuminate issues of public concern – but by acting as “honest brokers” providing information to help contextualize or evaluate competing claims and considerations, thereby empowering people to make their own decisions (rather than serving as “epistocrats” dictating to people what they should think or feel and how they should behave).

Scientists ignoring these preferences and engaging in indulgent #Resistance activities after the 2016 election proved highly consequential. The March for Science ended up polarizing trust in expertise right before the onset of a major global pandemic, when trust in science suddenly became extraordinarily important. The people who grew most alienated from scientists in the leadup to the pandemic were the same people who ended up shifting towards the GOP in subsequent elections: religious and ethnic minorities, young people, less urban, affluent or educated folks, and so on. These same populations had among the lowest vaccine uptake rates, and some of the highest rates of excess fatalities, over the course of the pandemic.

Put another way, there are people who are dead today who may have survived the pandemic if experts (and their cheerleaders) hadn’t engaged in alienating behaviors. Disproportionately found among the “excess dead” are the very people we often view ourselves as champions of: non-whites, religious minorities, the less materially “privileged,” and so on. However, precisely because it was mostly “those people” who ate the costs while most of “us” were personally insulated from the downsides of the March for Science and parallel movements, very little was learned from our poorly-conceived activism.

Indeed, despite the blowback against scientists after their ill-calibrated response to the 2016 election, many scholars and scholarly institutions and organizations nevertheless doubled down on partisanship during the next presidential contest.

Nature, for instance, endorsed Joe Biden in the 2020 election. Research (admirably) published in the same journal unsurprisingly found that the main effect of this endorsement was reduced trust in the publication and its articles. Here a reader might think, “surely, given this empirical result in their own pages about their own policy, the science periodical decided to step back from overt politics downstream, right? #TrustTheScience.” If only. In reality, after the 2020 election, Nature publicly pledged to increase censorship in their publications based on moral and political considerations. And in 2024, they once again explicitly encouraged readers to vote for the Democrat (because “the world needs a U.S. president who respects science”), once again joined by Scientific American and other peer publications.

None of these science publications seem to care one whit about what “the science” says about the impacts of their political activism. For many, it feels good to engage in this activism, and it endears them to mainstream symbolic capitalists, so they’re committed to doing it, “real world” consequences be damned. Others subscribe to what Nassim Nicholas Taleb calls “naive interventionism” — the belief that it’s important to “do something,” that “doing something” is better than standing idly by (even in the face of evidence that the “something” we want to do probably won’t work). However, in reality, there are many situations where refraining from action is the better path. For instance, it’s clearly better to “do nothing” than to take actions that are actively counterproductive to your professed causes and harmful to the institutions you belong to and the constituents you want to support.

You Ask, I Answer: We Have Never Been Woke FAQ
By Musa al-Gharbi

In the current (fourth) Great Awokening, symbolic capitalists ran roughshod over their bosses for more than a decade. We sowed institutional chaos and unrest that often spilled out into the public causing massive PR problems. We pushed companies to regularly deliver cultural products that “bombed” and advertising campaigns that backfired because we insisted on ham-fistedly injecting our preferred moral and political messaging into everything we did – with devastating impacts for many companies’ bottom lines. We shoved many of those “with the power to set system-wide and institution-wide goals” off of their perches (including, memorably, ousting traditional capitalist “Papa John” from the pizza company that bears his name — for speech crimes!). And that’s just looking at the economic sphere!

In terms of politics, symbolic capitalists completely bent an entire political party to their whim, pulling the Democrats hard “left” on cultural issues after 2010 (while insisting the party continue advancing our idiosyncratic economic interests at the expense of normie voters), devastating the party’s ability to succeed in national races. What we’ve seen over the last fifteen years hasn’t been politicos instrumentally conforming with symbolic capitalists’ preferences only insofar as it was useful to them – we’ve seen politicians engaging in electorally-suicidal behaviors in a desperate (and futile) attempt to appease people like us.

… the type of people who get folded into the symbolic professions tend to be more risk averse and conformist, and less solidaristic, than most other people. But it’s also because we profit immensely from the work we do, right alongside the billionaires and multinational corporations we serve. We comply because we want to: we’re eager to get ahead, to ingratiate ourselves with people in power, etc. And we aren’t super honest about these realities — not to others, not even to ourselves.

We have no problem coordinating and flexing our power when it comes to things that we find personally gratifying and instrumentally useful. When it comes to blacking out Instagram squares, insisting that people explicitly acknowledge their privilege, casting ballots for Democratic candidates, etc. we don’t say things like, “individual sacrifices and behaviors don’t matter.” We say individual action is both important and necessary. We solve all sorts of coordination problems to coerce our preferred behaviors out of others including, again, our institutions, bosses, political representatives, and so on. But when it comes time to put actual “skin in the game” — when addressing a problem would require us to incur real risks or sacrifices, to change our own aspirations, lifestyles, and so on (or when it’s us who face scrutiny for causing or benefiting from social problems) — then all of a sudden we hear “drop in the bucket” and “coordination problem” arguments. In these cases, we tend to gravitate towards narratives that the only real “solution” for the problems under consideration would be to completely burn down and reinvent the system… anything less is worse than useless… and because “the revolution” isn’t happening anytime soon (certainly not a leftist revolution), it seems there’s nothing left to do except resign ourselves to enjoying our elite lives, albeit with occasional pangs of guilt and bouts of self-flagellation.

… many professional families make dual-earner strategies work by offloading “women’s work” to other women (other women watch kids, prepare meals, clean the house, care for the sick, elderly, pets, and so on). I stressed how two-professional households often pay these women subpar wages so they can profit from their own family’s second salary despite purchasing tons of domestic labor from these other women. I highlighted how they often target undocumented women for this labor because they can pay them far less than U.S. citizens, demand more out of them, and treat them worse (because these migrants don’t have the same legal recourse with respect to wages, working conditions, etc. — if they make a problem, they risk deportation, and their clients know this and exploit it). Critically, I stressed, for those families who rely on undocumented immigrants to fulfill domestic roles, there’s no law that says they must be paid less than a U.S. citizen would accept for the same work. That is a choice families make. They see someone who is desperate and vulnerable, they know they can get away with shafting them on wages, and so they do that, enabling them to maximize profits from their second salary despite purchasing tons of “women’s work” from others. And, critically, they often think of themselves as “feminists” while engaging in these behaviors, in virtue of the fact that said exploitation allows wives (in heterosexual couples) to aggressively pursue careers alongside their husbands.

If people are so committed to advancing their own careers, living in particular neighborhoods, engaging in particular modes of self-presentation, building their personal wealth, and trying to ensure that their children reproduce their own elite position – to the point where they consume a lot of domestic labor to enable their lifestyle but don’t have a lot of money left over to pay said workers a decent wage – they can’t also say they’re deeply committed to paying their workers more but simply can’t afford to. It’s a choice to prioritize one’s elite lifestyle and one’s family’s elite reproduction over taking care of the people who make that lifestyle possible. It’s a choice that people can make differently by adjusting their lifestyles and reallocating their resources.

Why sociologist Musa al-Gharbi says social justice elites value performance over progress
By Paige Sutherland and Meghna Chakrabarti

AL-GHARBI: … one of the things that’s interesting about symbolic capitalists, and about the professions that we belong to, is that we tend to have much higher pay than most other workers in America. We tend to have much more autonomy in how we structure our time and the way we pursue our goals. We tend to have a lot more social prestige.

And from the beginning of a lot of our professions, the way we’ve justified these things, we’ve said, you should give us these things. You should give us this pay, this prestige, this autonomy, and so on, not for our own sake. But because if you give us these things, it will be to the benefit of everyone in society, including and especially the marginalized and the disadvantaged. And a lot of our fields are explicitly defined in terms of altruism and serving the common good.

And so what you might expect, and what we pledged would happen, is that as more resources and influence and power were consolidated in our hands, in the hands of symbolic capitalists, what you might expect is that a lot of inequalities would be shrinking, other social problems would be ameliorated, we would have growing trust in institutions, because of all the great work that we’re doing, and so on and so forth.

Instead, over the last 50 years, as the global economy has shifted away from industry and towards the symbolic professions, what we’ve seen instead is actually growing inequalities, increasing institutional dysfunction, reduced trust in institutions, reduced trust in each other, growing affective polarization and so on.

… if you look at New York City, you have some of the highest concentrations of wealth, some of the largest concentrations of millionaires and billionaires and so on, but you also have really high inequality, huge poverty rates, and some of the most racially segregated school systems in the country.

It’s one of the places that folks are fleeing at the highest rates, because a lot of people who live there find it unaffordable or unsustainable to live there, and so on and so forth. And who should we blame for this? We can’t blame those darn Republicans. If you look at the city council of New York City, it’s overwhelmingly Democrats; the governor is a Democrat; the mayor is a Democrat; the state assembly is Democrats by roughly two to one.

If you look at our delegation to the federal Congress, both of our senators are Democrats, our House representatives are Democrats by more than two to one, and so on and so forth. So we can’t plausibly blame Republicans for any of these problems, which are more pronounced in places like New York and other knowledge economy hubs that Democrats control with nearly one-party rule.

Note to Democrats: It’s Time to Take Up Your Hammers
By Binyamin Appelbaum

I would prefer to live in a world where the recent news that more than 146,000 New York City schoolchildren experienced homelessness during the last school year was regarded as a crisis demanding immediate changes in public policy.

Homelessness is the most extreme manifestation of the nation’s housing crisis. America simply isn’t building enough housing, which has driven up prices, which has made it difficult for millions of households to keep up with monthly rent or mortgage payments. Every year, some of those people suffer at least a brief period of homelessness.

Popular anger about the high cost of housing, which is by far the largest expense for most American households, helped to fuel Mr. Trump’s comeback. He recorded his strongest gains compared with the 2020 election in the areas where living costs are highest, according to an analysis by the Economic Innovation Group, a nonpartisan think tank.

The results are more than a backlash against the party that happened to be in power. The animating principle of the Democratic Party is that government can improve the lives of the American people. The housing crisis is manifest proof that government is failing to do so. And it surely has not escaped the attention of the electorate that the crisis is most acute in New York City, Los Angeles and other places long governed by Democrats.

Housing remains America’s biggest supply chain problem
By Rana Foroohar

The country’s housing production hasn’t kept pace with household formation since the Great Financial Crisis of 2008, when the number of housing unit starts dropped off a cliff. Since then, demand has far outpaced supply, leaving the US millions of units short of what its population needs.

Part of this is about nimbyism, meaning the “not in my backyard” approach to housing policy at a local level. While plenty of Americans in big cities such as New York, Los Angeles or San Francisco would agree that there’s a need for more affordable housing, and indeed more housing in general, few prosperous homeowners (or even renters) would vote to locate such a project near them.

Studies have found that city politics around zoning tends to favour the opponents of plans rather than the developers. This is a key reason that housing remains constrained.

This problem is being further fuelled by an influx of migrants to sanctuary cities in the US, where shelter is in theory guaranteed but in practice is not available. There are also lingering issues with inflation on materials and labour since the pandemic. These have either deterred new home construction or simply made it unaffordable.

‘Stuck’ in place: Author traces America’s mobility crisis to a Modesto law enacted in 1885
By Marc Weingarten

Yoni Appelbaum kicks off “Stuck: How the Privileged and the Propertied Broke the Engine of American Opportunity,” his insightful book about our national housing crisis, with a personal story that will be all too familiar to any Angeleno trying to get ahead. Having settled nicely into a modest two-bedroom apartment in the formerly working-class neighborhood of Cambridgeport, Mass., with his wife and children, Appelbaum finds himself being financially squeezed by, well, just about everything. “Rent was costing us a third of our income each month, and it kept going up,” he writes. “An apartment with a third bedroom was beyond our reach.”

The cost of living is eating up salaries and savings across the country. Half of all renters spend 30% of their income on housing, the latest information from the U.S. Census Bureau shows, and a quarter spend 50% or more.

The author, a deputy executive editor of the Atlantic and former history lecturer at Harvard, skillfully blends zoning history with his own reportage, digging into the history of his apartment to find some answers. The building, a “three-decker” built a century ago, was constructed to suit the needs of New England’s industrial class. Now, it is inhabited by the 1%: “graduate students, doctors, architects, engineers.”

How did this come to pass? Appelbaum makes a compelling case for a “mobility crisis.” “Americans used to be able to choose where to live,” he writes, “but moving toward opportunity is now, largely, a privilege of the economic elite.” Where once we were a nation constantly on the move in search of a better life, forging new communities in the process, we now find ourselves priced out of urban centers and other traditional incubators of compensatory working life. Thanks in part to legislation that has choked off housing inventory, formerly working-class buildings like the one where Appelbaum resides are now out of reach for the working class.

The story of America is the story of migratory settlement, from the Puritans who broke from the Church of England and settled in Massachusetts in 1630 to the millions of European exiles in New York and other cities along the Eastern Seaboard by the early 20th century. According to Appelbaum, the traditional narrative of America has been turned upside down: A “nation of migrants” that once relocated in search of a better life is now staying put, victims of restrictive zoning laws and antigrowth regulation that has turned the country into a patchwork of exclusionary regions surrounded by low-income neighborhoods.

Racial zoning covenants first gained traction in Modesto a few decades after the Gold Rush inspired a mad migratory dash to the region. When Chinese immigrants who had provided laundry services for prospectors began to creep in from the outskirts into predominantly white districts, locals tried physical intimidation and other tactics to force them out. When that didn’t work, Modesto’s city fathers in 1885 enacted an ordinance to force laundry services into an area that was already known as Chinatown.

Racial zoning policy spread across the Midwest and became a cudgel to sweep away those considered undesirable. Apartment dwellings, considered synonymous with urban blight, were banned in favor of single-family homes, while mostly white suburbs were kept off-limits to Black Americans and other minorities. The great migratory experiment that had created so much richness in American life had been shut down.

Soaring wealth inequality has remade the map of American prosperity
By Tom Kemeny

The wealthiest cities in the U.S. are now almost seven times richer than the poorest regions, a disparity that has almost doubled since 1960. Meanwhile, especially in urban coastal areas, wealth has become highly concentrated in the hands of a few.

Experts define wealth as the difference between the value of a household’s assets – cash, real estate and stocks, for example – and its liabilities, including mortgages, student loans and credit card debt. Wealth is also called “net worth.”

The expansion of wealth inequality is a challenge to the American Dream: the notion that, with hard work, opportunity and prosperity are accessible to all.

Wealth enables choice and stability. Poorer households have more trouble providing the best nutrition and education for their children. Additionally, people growing up in lower-wealth households are less likely to spur innovation in a field or start successful new businesses. Wealth also profoundly affects one’s health, leaving the least wealthy in our society significantly more vulnerable to premature death and disability.

At the community level, the lack of wealth can make a major difference in how well cities work for their residents.

People who grow up in wealthier places can reap benefits that span generations. As a result of property taxes and philanthropy, wealthier communities have greater resources for schools, health care, transportation and other infrastructure.

Good schools are one benefit of wealthy communities that may improve social mobility even for children born into poverty, studies suggest.

The map for 2022 reveals major disparities in typical (median) net worth across communities. Many of the least wealthy locations are in poor neighborhoods in some of America’s biggest cities – for instance, parts of the Bronx and East Harlem in New York, and areas of Houston and Milwaukee. A typical household in the five poorest communities had assets worth about $18,000. Many households in these locations held more debt than assets. Other wealth-poor areas of the country included parts of Baton Rouge, Louisiana, and Cincinnati, Ohio.

The wealthiest communities today tend to be found in urban coastal areas.

Palo Alto, California, and Nassau County, New York, are two of the nation’s five wealthiest places. The top five areas had median household net worth of nearly $1.7 million. That’s almost 90 times wealthier than the poorest five places.

Multiple factors may explain the growing pooling of wealth. They include the rising concentration of high-paying jobs in major metro areas and the explosive growth in housing values in these high-performing cities.

Changing federal tax policies have also favored the affluent at the expense of regular Americans.

How California’s excesses inspired the ‘abundance’ craze
By Dustin Gardiner

Abundance reflects a uniquely 21st century California zeitgeist. Today the state’s political dynamics are shaped by deep frustration over Democratic leaders’ inability to build enough housing, provide clean streets, lower the cost of living and instill a sense of safety amid a drug addiction epidemic.

It is, like much in California, a debate that plays out among Democrats, who have had unrivaled control of state government for the past decade and a half and now control all of its major cities.

But the movement faces strong critics. On the left, the Abundance effort has been pilloried as coastal-effete liberal thinking that may address the concerns of young professionals in big cities but won’t help Democrats regain support among working-class voters elsewhere. On the right, Republicans are skeptical that Democrats can deliver on the movement’s promises when two powerful constituencies — labor unions and environmentalists — are reluctant to peel back hard-fought regulations.

Getting America Unstuck
By Yoni Appelbaum

Almost all new construction in the United States now requires government approval, and anyone with sufficient time and resources and education can effectively veto that approval, or at least impose great expense and delay. The result is that in the very places that need it most desperately, housing has become prohibitively difficult to build. If the freedom to move was originally secured by allowing Americans to choose their own communities, then it has been undone by a series of legal and political changes that restored the sovereignty of local communities and allowed them again to select their own members.

As a result, housing has grown artificially scarce and prohibitively expensive. A fortunate few can still afford to move where they want. Most people, though, would have to pay so much more for housing in prospering cities that offered better jobs that relocation would leave them worse off overall. Americans aren’t moving anymore, because for so many moving threatens to cost more than it delivers.

More Americans have stopped finding new jobs. Switching jobs frequently when you’re young correlates with occupational and economic mobility, but the share of people switching industries, occupations, and employers has fallen dramatically, particularly among younger workers; they’ve grown less likely to work for four or more employers by the time they’re thirty and more likely to work for just one or two. And more Americans are ending up worse off than their parents. In 1970, about eight out of every ten children turning twenty could expect to earn more than their parents did; by the turn of the century, that was true of only half, and the proportion is likely still falling.

Boomers Are Buying the Most Homes (Again)
By Samantha Latson

… First-time buyers are “facing limited inventory, housing affordability challenges, and having difficulty saving for a down payment,” said Brandi Snowden, director of member and consumer survey research at N.A.R.

Baby boomers, by and large, simply have more cash in the bank. While 51 percent of older boomers (ages 71 to 79) and 39 percent of younger boomers (ages 61 to 70) paid for their homes with cash in 2023-24, more than 90 percent of buyers under the age of 45 (all millennials and Gen Z-ers) relied on financing and family support, according to the report.

Also notable: multigenerational home buying is on the rise. The 2025 report showed that 17 percent of buyers purchased homes suitable for multigenerational living in order to cut costs, tend to aging parents, or house adult children. That was up from 14 percent a year ago.

The Meek Shall Inherit from Their Boomer Parents
By Oren Cass

One problem with rising income inequality is that it leads to even sharper increases in wealth inequality. From 1989 to 2019, for instance, the liquid net worth of the wealthiest 10% of American households increased by $30 trillion. For the bottom half of households, liquid net worth fell.

The unprecedented wealth amassed by Baby Boomers may not cause trouble sitting in their retirement accounts and home equity. But it is dramatically distorting markets as it subsidizes the lifestyles of select members of younger generations. And as the Boomers themselves pass on, the estimated $120 trillion that they will leave behind, just in the United States, is poised to remake the face of the economy and create a landscape of winners and losers uncorrelated with merit, effort, or output. In the United Kingdom, one-third of those born in the 1980s will receive an inheritance equal to at least ten years’ earnings for their peers.

Did the Boomers really produce $120 trillion more than they consumed? No. Assets that they held, stocks and real estate, appreciated in value. Others out there did not consume $120 trillion more than they produced, now standing ready to produce for the lucky inheritors.

The fundamental premise of a market economy, and certainly the basis for its claim to be a just mechanism for allocating resources, is that people deserve what they have because they earned it. “Earned” is an oddly abstract word—what we actually mean is that they produced something of equivalent value. You get out what you put in.

“I get the bigger house because I have the higher-paying job and saved more” is imperfect, to be sure, but it is coherent, and establishes some useful incentives. “I get the bigger house because my parents had the higher-paying job” falls flat.

Wealth and inheritance taxes are one option to pursue, though making them work well is a technical challenge. (As an aside, it is always fun to ask rich progressives what they think of wealth taxes. You would be amazed how quickly they discover endless reasons it just can’t work. I’ll never forget listening to one billionaire, who styled himself a real pitchfork populist, implore me to consider the plight of his friend, who had most of his wealth stored in fine art. What was he to do? Sell the art?! That’s unreasonable.)

Conservatives have become acutely aware in recent years of the havoc wreaked on society by massive pools of wealth handed down to unaccountable charitable foundations. We should probably attend equally to the havoc when the recipients are not non-profits but trust-fund kids.

Gen Z wants an inheritance. Good luck with that, say their boomer parents
By Daniel de Visé

Many young adults are pinning their hopes on the Great Wealth Transfer, a generational exchange of riches that could pass $90 trillion from baby boomers to their heirs over the next 20 years.

But many boomers have other plans.

Indeed, retirement experts say much of the Great Wealth Transfer may go to hospitals and long-term care facilities as boomers confront the perils of old age.

“Nobody knows when they are going to die, and the idea of running out of money is rightfully terrifying to most people,” said Jonathan Swanburg, a certified financial planner in Houston.

Swanburg said many of his clients don’t set out to leave specific sums to their children, because they fear they might need the money for themselves. In the end, though, there’s usually something left over for the heirs.

How Many Adult New Yorkers Are Secretly Subsidized by Their Parents?
By Madeline Leung Coleman

More than at any other time in New York’s history, parent money shapes our culture. It’s in the restaurants started by big-pharma heirs and the fashion line that descends from Flushing real estate. It’s in the magazines with no ads and the white-cube galleries run by 20-somethings. Unlike nepotism, which favors industry loyalty, the recipients of parent money can often choose their own adventure.

New York has always been stuffed with rich kids chasing the dream on Daddy’s dime. However, it didn’t always feel as if those were the only people who could live here — as if the whole city bent to the budgets of the secretly funded. It does feel that way now because we’re living through a catalytic overlap: Rent prices are shooting up, salaries are not, and boomers are preparing to die. While they are being called the wealthiest generation ever to have lived — the right-place, right-time winners of both the housing and stock markets — their children are stalling out, scraping their flatlining bank accounts to pay rents that, in Manhattan, now average over $5,000 a month. Studio apartments sell for more than half a million. Full-time day-care tuition can run you $40,000 a year. For a lot of New Yorkers who are Gen X on down, markers of adulthood like buying a home, starting a business, or having a kid have gone from aspiration to fantasy — unless they’re lucky enough to benefit from a process financial analysts are calling “the Great Wealth Transfer”: the $124 trillion in assets that, over the next two decades, older generations will hand down to charities and, most of all, to their heirs.

It sounds enormous, epochal, game changing. And it will be, but not for most of us — over 50 percent of that $124 trillion is in the hands of the top 2 percent.

Since one of the most common ways for parents to give money is through real estate, Realtors have a front-row seat to who’s getting how much. “Homeownership is sort of the capitalist report card, right?” says Corcoran agent Andrew Schwartz. “If you have made it to a level where you can purchase a home, especially here, then that’s the hallmark of success in our society. But when push comes to shove, someone who isn’t generationally wealthy probably cannot buy a property in New York City.” In seven years of working in the industry, Schwartz has yet to see a first-time homebuyer who can close on a property without their parents’ help. Not only are the prices massive — the average for a Brooklyn condo last year was $1.3 million; for a brownstone, $3.3 million — but the mortgage rates are too, meaning more people even considering buying are doing it in all cash.

It wasn’t always this way. Longtime real-estate agent Louise Phillips Forbes says her first transaction, in 1989, was a one-bedroom on the Upper West Side for $63,000 — the equivalent of about $157,000 today. (A similar unit in that building sold recently for $730,000.)

Nearly a quarter of New Yorkers live below the poverty line, double the national rate. Many, many more may not be considered poor but struggle to cover more than the basics; in New York, a $100,000 salary is considered middle class. Last year, liberal think tank the Urban Institute introduced a measurement called the True Cost of Economic Security, which factors in the specific cost of living in different areas around the country to calculate how many people are not just barely getting by but meeting an income threshold that will let them live sustainably — or, as the researchers put it, thrive. Over 60 percent of New Yorkers do not meet the threshold. Of families with children, it’s 72 percent. And they would need to be making a lot more to get there: On average, for New Yorkers, that would mean an additional $40,000 a year.

The Yuppification of Philadelphia
By Tom McGrath

… among the few losers in America’s great postwar period of power and prosperity had been the country’s urban areas. The population numbers told the sad tale. Between 1950 and 1980, Philadelphia’s population dropped from 2.0 million to 1.7 million; Chicago’s from 3.6 million to 3.0 million.

So why, now, was a small, elite slice of a new generation reversing the migration and moving back into cities? Cost was part of the answer. You could get a deal in the city — once‐stately old homes with great bones were selling for $30,000, sometimes even less. Even more powerful, though, was what living in the city said about your identity: You were cosmopolitan. You were sophisticated. You were not, above all, your conformist, suburban‐dwelling parents.

By the late ’70s the phenomenon of young professionals situating themselves in cities was widespread enough that it had earned a name: the “Back to the City” movement.

In 1977, the Parkman Center for Urban Affairs in Boston hosted a conference that brought scholars and policy wonks together with a group of young Boston professionals. (The Parkman folks also fanned out around the country to interview even more city-loving young professionals.)

What was perhaps most interesting about the group of young city dwellers was their own view of themselves. “I’d say we were more concerned about intellectual things,” a conference attendee who lived in Boston’s Back Bay said. “We want to have seen the latest films. We want to know what people are reading.”

Said a New Yorker: “I want a racially and socially mixed neighborhood….I don’t want to live with a collection of people who are just like I am professionally and socially.” Perhaps the most telling comment came from a young St. Louis woman, who said simply, “We’re more interesting.”

When Yuppies Ruled
By Louis Menand

Those “ambitious and educated” gentrifiers were the young urban professionals, the yuppies.

The demographic that the term was intended to pick out—professionals under forty living in cities—was fairly small in 1984, something like 1.2 million people.

When we think of American life in the nineteen-eighties, we think of the yuppie.

You can dispose of discretionary dollars in various ways. If you were a yuppie, you spent them on yourself. You consumed conspicuously. That’s what the yuppie-haters hated most about the yuppies. You bought things you didn’t need and paid extra for the brand: Sasson jeans, Frette linens, Cross pens, Rolex watches, Perrier water, Aprica strollers. Faux high-end imports emerged—Grey Poupon mustard (then owned by RJR Nabisco), Häagen-Dazs ice cream (invented in the Bronx).

The yuppie strode forth from the economic wreckage of the nineteen-seventies: two oil crises, mortgage rates at thirteen per cent, a huge loss of manufacturing jobs in major industries like steel and cars, a stock market in the doldrums. When the economy recovered, in the early nineteen-eighties, it was easy for people to feel rich without feeling guilty. They had seen what it was like to worry about money. Spending it felt liberating.

It was no mystery, even in 1984, that the various displays of wealth and conspicuous consumption that the young urban professional was made to stand for masked a much grimmer economic reality. The yuppies were, after all, just a tiny sliver of even the baby-boom population, most of whom were not dining at the Odeon or hanging out at Studio 54.

The Congressional Budget Office reckoned that, between 1977 and 1988, the years of the Reagan recovery, the bottom eighty per cent of American families experienced a drop in income. The income of the lowest decile fell by more than fourteen per cent, that of the second lowest by eight per cent, and so on. But incomes for the top decile rose by more than sixteen per cent; for the top five per cent of households, they rose by twenty-three per cent; and for the top one per cent they rose by almost fifty per cent. In 1973, the median household income was $26,884; in 1987, it was $25,986.

What Americans were seeing was the fracturing of the middle class. Effectively, the middle of the middle was dropping out, and a wealth-and-income gap was now growing between the top ten per cent—the upper middle class and the super-rich—and the rest of the population. The yuppies were on the far side of this divide. People hated them because they resented them. (Weirdly, then as now, they did not resent the super-rich.)

Where did those young professionals come from? The answer is: the university. They were products of the higher-education industry. In the nineteen-sixties, college enrollment doubled. More people were coming out of college than the workplace needed, and one response was to stay in school. Between 1971 and 1986, the number of bachelor’s degrees awarded increased by eighteen per cent. But the number of M.B.A.s awarded increased by eighty-five per cent, the number of M.D.s by ninety-two per cent, and the number of law degrees by a hundred and forty per cent. Those were the yuppies.

In the nineteen-seventies, the so-called college premium, the difference in average income between people with a college degree and people with only a high-school diploma, was thirteen per cent for men and twenty-one per cent for women. By 1993, the over-all premium was fifty-three per cent. Today, it’s about seventy-five per cent, reflecting the wage gap between workers in the knowledge economy and workers in the less résumé-intensive precincts of the service economy.

So what has actually changed? In the upper middle class today, life is the same as it was in the nineteen-eighties, only more so. Tastes are no different. San Pellegrino may be preferred to Perrier, Lululemon has superseded designer jeans, and so on. But urban professionals, young and not so young, have largely the same life styles that they had during the Reagan Administration. Gut renovations, boutique gyms, and destination dining have not disappeared.

How 1980s Yuppies Gave Us Donald Trump
By Tom McGrath

Of course, for all the hype yuppieness was receiving in those early months of 1984, it could have been yet another here‐today‐gone‐tomorrow media fad — the sociological equivalent of the Hula‐Hoop or Pet Rock. But then came the 1984 presidential campaign, and everything changed.

“Yuppies have become the strike force of the Hart campaign,” CBS News reporter Bob Simon said in a piece that aired nationally in late March. Simon used the story as an opportunity to introduce evening news viewers to what, precisely, a yuppie was — and to let a handful of yuppies explain what they saw in Hart. “We’re fairly sophisticated and educated and well‐read,” a young woman in Connecticut said, “and I think that’s who Gary Hart appeals to.”

In the New York Times, reporter Steven Roberts went even deeper in a piece headlined, “Hart Taps a Generation of Young Professionals.” Roberts noted specific voter outcomes — in Florida, Hart had won among young voters, college grads and those making $50,000 a year or more.

… in an editorial, the New York Times was announcing the dawn of a new era. “This truly is the Year of the Yuppies, the educated, computer literate, audiophile children of the Baby Boom,” the Times wrote. “By definition, not all baby boomers are Yuppies. But the Yuppies are numerous — 20 percent of the vote in New Hampshire, 10 percent in Illinois. And they possess atypical affluence and influence: These are the people who created the counterculture. They still listen to rock music, still wear wire‐rimmed glasses. Does their politics of the left also endure? Or does turning gray mean, as for other generations, turning right?”

The answer, the editorial continued, likely depended on the issue. Citing a recent survey, the paper said yuppies “strongly favor the equal rights amendment and freedom of choice on abortion, and oppose employment discrimination against homosexuals.” But on other issues, they were more conservative or more self‐absorbed. They were less concerned about unemployment than other age groups, and more inclined to favor further cuts in federal spending. As for social welfare issues, they were less likely than older Democrats to support income maintenance programs.

As the campaign progressed, it became clearer that many of the baby boomers who’d been so excited by Hart’s fresh vision were ready to vote for Reagan. In a poll of voters between 18 and 34 who made more than $25,000 per year, Reagan held a 24‐point lead in a head‐to‐head matchup with Mondale.

For some young professionals, their support was based on Reagan’s manner and leadership style. But equally important was Reagan’s handling of the economy. College-educated young professionals had done better than most over the last four years, and seven in 10 of them believed Reagan was more likely than Mondale to keep making them better off financially.

Forty years after the fact, the election of 1984 stands as a clear turning point in America, particularly for Democrats. The magnitude of Reagan’s landslide was scarring for the party, convincing a younger generation of leaders in particular that the party’s profile — as the home of working people, labor unions and trade protectionism — was no longer a recipe for electoral success. If they wanted to thrive, they argued, they needed to go harder in the direction that Gary Hart — and yuppies — had pointed them.

In 1992, that faction of the party got its wish with the nomination and election of Bill Clinton, not only a centrist but a Yale- and Oxford-educated baby boomer — the first yuppie president. In office, Clinton pursued an agenda that largely put the desires of college-educated professionals above those of the blue-collar working class. He signed welfare reform and announced the era of big government was over. He championed NAFTA, which made it easier to ship manufacturing jobs to Mexico. He deregulated the financial industry, boosting the power and profits of Wall Street.

Meanwhile, Democrats increasingly became the party of college graduates. In the late 1990s, fewer than 25 percent of Democrats held a college degree, compared with 30 percent of Republicans. But by 2010 the share of college-educated Democrats had risen to nearly 35 percent, and by 2020 it was nearly 50 percent. In contrast, the share of college graduates in the GOP barely budged, and today still hovers around 30 percent.

Though they were a minority in the country, the well-educated baby boomers who had come to the fore in the first half of the ’80s effectively became America’s ruling class. Their basic political philosophy — liberal on social issues, conservative on economic ones — dominated for decades, with support for gay marriage and abortion rights growing at the same time that taxes continued to be cut and globalization increased. More and more this well‐off professional class lived among themselves. In 2012, a researcher identified several hundred “super zip codes,” some within cities, most just outside of them, that attracted an extraordinary number of well‐educated, affluent families.

As for the rest of America? Their eyerolling over yuppies in the mid-’80s hardened into a deeper resentment of what became known as “the elites,” and in many respects it was understandable. By 2016, families at the top of the economic pyramid controlled 79 percent of all wealth in America, up from 60 percent in the 1980s. The percentage of wealth owned by the middle class dropped from 32 percent to 17 percent.

Can the 1980s Explain 2024?
By Nicholas Lemann

The emergence of the yuppie as a social type was notable, because it went against two previous prevailing pop culture narratives: first, that the Baby Boom generation was centrally defined by idealism, even protest (yuppies were materialistic and drawn to business careers); and second, that affluent, educated people, as they entered their thirties, would move to the suburbs and start families (yuppies gentrified city neighborhoods, went to restaurants and fitness clubs, and put off marriage).

For liberals, what might be most disturbing about the politics of the moment is that they are losing the loyalty and trust of the Americans they think of as their natural allies, those who are not prospering. The Republican Party has been successfully wooing voters in left-behind places, voters who didn’t go to college, voters with precarious jobs, even (starting from a low baseline) minority voters.

Trump voters said they were angry about the economy – many of them had a point
By Don Leonard

The consumer price index for all urban consumers is the measure of inflation that the Bureau of Labor Statistics uses to calculate real incomes. To arrive at this figure, the bureau averages the prices for a basket of goods and services. It then assigns weights to individual items based on their relative importance in terms of what average American consumers spend on things like food, housing and medical care.

To understand why this method of averaging can skew real income and inflation data to reflect the economic realities of wealthier households, consider what’s going on with housing, the biggest expense for most Americans.

The Bureau of Labor Statistics presumes that housing accounts for 36.5% of all expenditures for the average American household. That leaves 63.5% of their purchasing power to cover the costs of other goods and services.

One of the assumptions behind the consumer price index is that households spend 8% of their income on health care. But all Americans pay far more than that, according to a 2020 Rand Corporation study.

Middle-income people spend around 21%, the lowest-earning households spend 34%, and the highest-earning U.S. households spend 16% of their income on medical services, Rand found.

Virtually everyone spends money on housing and health care. But the consumer price index also takes into account items that not everyone has to spend money on at a given point in time.

For example, the index assumes that American households spend, on average, only 0.7% of household income on child care or preschool each year. For families with infants or toddlers, the reality is much grimmer. One 2024 survey put the average cost of child care at 24% of household income.
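A rough way to see why these weights matter: the index is a weighted average of category price changes, so a household whose actual budget shares differ from the official weights experiences a different effective inflation rate even when it faces exactly the same prices. The short Python sketch below is only an illustration — the category price changes and the second household’s budget shares are hypothetical, while the official-style weights echo the shares quoted above for housing, health care and child care.

```python
# Minimal sketch of a fixed-weight price index. The price changes and the
# "stretched household" budget shares are hypothetical; the official-style
# weights follow the shares cited in the article (housing 36.5%, health care 8%,
# child care 0.7%), with the remainder lumped into "everything else".

def weighted_inflation(weights, price_changes):
    """Weighted average of category price changes (weights must sum to 1)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * price_changes[k] for k in weights)

# Hypothetical one-year price changes by category.
price_changes = {
    "housing": 0.06,
    "health care": 0.05,
    "child care": 0.07,
    "everything else": 0.02,
}

# Weights roughly matching the official expenditure shares cited above.
official_weights = {
    "housing": 0.365,
    "health care": 0.08,
    "child care": 0.007,
    "everything else": 0.548,
}

# A hypothetical household that devotes more of its budget to housing,
# health care and child care, and less to everything else.
stretched_household = {
    "housing": 0.45,
    "health care": 0.20,
    "child care": 0.10,
    "everything else": 0.25,
}

print(f"headline-style inflation:   {weighted_inflation(official_weights, price_changes):.1%}")
print(f"stretched household's rate: {weighted_inflation(stretched_household, price_changes):.1%}")
```

Under these made-up numbers the headline-style figure comes out near 3.7%, while the stretched household’s effective rate is closer to 4.9% — the same direction of skew the article describes.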

According to the exit poll data, Kamala Harris won among families who made less than $30,000 in 2023 and those who made more than $100,000. By comparison, Trump won among families who earned between $30,000 and $99,999 — too much to qualify for government assistance, but – in many cases – not enough to get by.

America Has Never Been Wealthier. Here’s Why It Doesn’t Feel That Way.
By Talmon Joseph Smith

The share of wealth held by families in the top 10 percent has reached 69 percent, while the share held by families in the bottom 50 percent is only 3 percent, according to the latest reading from the nonpartisan Congressional Budget Office. (When future income claims from Social Security benefits are included, the bottom 50 percent hold 6 percent of total wealth.)

And while wealth has risen for the less wealthy half of the population in recent years, much of the uptick has been locked up in what financial analysts call “illiquid assets” — gains in home prices and stock portfolios — which are not easily translated into cash to pay for bills and expenses that are much higher than they were a few years ago.

Over the past four years, the University of Michigan’s monthly survey of consumer sentiment has shown those in the bottom two-thirds of income to be deeply pessimistic about the economy — with ratings at rock-bottom levels more typical of deep recessions, including the 2008 financial crisis.

In contrast, sentiment among the top third of earners recently rebounded after falling from prepandemic levels.

“Higher-income people drive most of aggregate spending,” said Joanne Hsu, an economist and director of the Michigan survey. “They were on an upward surge of sentiment between 2022 and 2024, and that’s consistent with their strong spending.”

Part of the disconnect may stem from the tendency among economists to track income progress primarily through percentage change rather than dollar amounts.

Even when inflation was peaking around 9 percent and diluting income growth, Ms. Hsu explained, “a 10 percent boost to middle and especially higher incomes is money that feels real, like you can do something with it.”

For someone making $100,000, that means a $10,000 raise. But a 10 percent increase at the bottom, perhaps to an hourly wage of $16.50 from $15, “means you’re still living hand-to-mouth,” she added.

In a recent report, Matt Bruenig, the president of the People’s Policy Project, a liberal think tank, evaluated the long-running question in U.S. economics of how many adults are living paycheck to paycheck — a term plagued, he said, by “inherent ambiguities.”

Drawing on data from the Survey of Household Economics and Decisionmaking, conducted annually by the Federal Reserve Board, Mr. Bruenig noted that “if we define someone as living paycheck to paycheck if they either say they do not have three months of emergency savings or say they cannot afford a $2,000 emergency expense,” then 59 percent of American adults are “living paycheck to paycheck.”

The homeownership rate for adults under 35, which peaked in 1980 at 50 percent, has fallen to 30 percent. Estimates from economists at the National Association of Home Builders in 2024 indicated that about half of American households could not afford a $250,000 home and that a large majority could not afford a median-priced home, now $419,000.

Belief in Mr. Trump’s ability to steer the economy played a key part in his election victory. And he promised to lower prices and ease the cost of living upon entering office. But public approval of his handling of the economy is only 39 percent, with just 32 percent of respondents approving of his approach to the cost of living, according to Reuters/Ipsos polling.

Owen Davis, a labor economist and research fellow at the Siegel Family Endowment — a nonprofit that funds education and work force research — believes that questions of economic dissatisfaction and the constant deliberation in recent years over whether the U.S. economy is, or isn’t, heading into a recession “often get lumped together” in unhelpful ways.

“We need to be able to have two different conversations about the economy,” Mr. Davis argues — one about the overall size, steadiness and direction of “the ship,” and another about its quality.

“We need to be able to distinguish between the question of whether the ship is sinking,” he said, “and the question of whether the accommodations on the ship are adequate.”

The Wealthiest 10% of US Households Now Represent Nearly 50% of Consumer Spending
By Nicholas Morine

… The wealthiest 10% of American households now account for nearly half of all consumer spending, the highest number on record since at least 1989, as FOX Business reported.

Citing data from a Moody’s Analytics report authored by Mark Zandi, the news outlet reported that the richest 10% of U.S. households — defined as making about $250,000 or more — represented 49.7% of all consumer spending. That is the highest figure since Moody’s began tracking the metric, according to Marketplace, which also pointed out that consumer spending drives approximately 70% of United States GDP.

Between September 2023 and September 2024, the highest earners increased their spending by 12%. In contrast, spending by both lower- and middle-income American households declined over the same period.

Per The Wall Street Journal, about 30 years ago, the wealthiest 10% of American households were responsible for about 36% of U.S. consumer spending.

As a whole, the wealthiest American households have increased spending far beyond the rate of inflation — but the remainder of Americans haven’t, at least not to the same degree.

The bottom 80% of earners spent 25% more than they had four years prior, barely edging out price hikes of 21% over the same period of time. On the other hand, the top 10% of earners spent 58% more.
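As a back-of-the-envelope check on that comparison, the nominal growth figures can be converted into inflation-adjusted (real) growth with the standard adjustment. The percentages in the sketch below are the ones quoted above; everything else is arithmetic.

```python
# Convert nominal spending growth into real (inflation-adjusted) growth.
# Figures are the ones reported above: 21% inflation over the period,
# 25% nominal spending growth for the bottom 80%, 58% for the top 10%.

def real_growth(nominal_growth, inflation):
    """Real growth implied by nominal growth over a period with given inflation."""
    return (1 + nominal_growth) / (1 + inflation) - 1

inflation = 0.21
bottom_80_nominal = 0.25
top_10_nominal = 0.58

print(f"bottom 80% real spending growth: {real_growth(bottom_80_nominal, inflation):.1%}")
print(f"top 10% real spending growth:    {real_growth(top_10_nominal, inflation):.1%}")
# -> roughly 3% for the bottom 80%, versus roughly 31% for the top 10%.
```

In other words, the bottom 80 percent’s spending rose only a few percent in real terms, while the top 10 percent’s rose by around 30 percent.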

The Top 1% of Americans Have Taken $50 Trillion From the Bottom 90%—And That’s Made the U.S. Less Secure
By Nick Hanauer

According to a groundbreaking new working paper by Carter C. Price and Kathryn Edwards of the RAND Corporation, had the more equitable income distributions of the three decades following World War II (1945 through 1974) merely held steady, the aggregate annual income of Americans earning below the 90th percentile would have been $2.5 trillion higher in the year 2018 alone. That is an amount equal to nearly 12 percent of GDP—enough to more than double median income—enough to pay every single working American in the bottom nine deciles an additional $1,144 a month. Every month. Every single year.

As Price and Edwards explain, from 1947 through 1974, real incomes grew close to the rate of per capita economic growth across all income levels. That means that for three decades, those at the bottom and middle of the distribution saw their incomes grow at about the same rate as those at the top.

But around 1975, this extraordinary era of broadly shared prosperity came to an end. Since then, the wealthiest Americans, particularly those in the top 1 percent and 0.1 percent, have managed to capture an ever-larger share of our nation’s economic growth—in fact, almost all of it—their real incomes skyrocketing as the vast majority of Americans saw little if any gains.

… In 2018, the combined income of married households with two full-time workers was barely more than what a single-earner household would have earned had inequality held constant. Two-income families are now working twice the hours to maintain a shrinking share of the pie, while struggling to pay housing, healthcare, education, childcare, and transportation costs that have grown at two to three times the rate of inflation.

The iron rule of market economies is that we all do better when we all do better: when workers have more money, businesses have more customers, and hire more workers. Seventy percent of our economy is dependent on consumer spending; the faster and broader real incomes grow, the stronger the demand for the products and services American businesses produce. This is the virtuous cycle through which workers and businesses prospered together in the decades immediately following World War II. But as wages stagnated after 1975, so too did consumer demand; and as demand slowed, so did the economy.

… this upward redistribution of income, wealth, and power wasn’t inevitable; it was a choice—a direct result of the trickle-down policies we chose to implement since 1975.

We chose to cut taxes on billionaires and to deregulate the financial industry. We chose to allow CEOs to manipulate share prices through stock buybacks, and to lavishly reward themselves with the proceeds. We chose to permit giant corporations, through mergers and acquisitions, to accumulate the vast monopoly power necessary to dictate both prices charged and wages paid. We chose to erode the minimum wage and the overtime threshold and the bargaining power of labor. For four decades, we chose to elect political leaders who put the material interests of the rich and powerful above those of the American people.

The Silicon Valley Elite Who Want to Build a City From Scratch
By Conor Dougherty and Erin Griffith

In California, housing has long been an intractable problem, and Silicon Valley’s moguls have long been frustrated with the Bay Area’s real estate shortage, and the difficulty of building in California generally, as their work forces have exploded. Companies like Google have clashed with cities like Palo Alto and Mountain View over expanding their headquarters, while their executives have funded pro-development politicians and the “Yes in my backyard” activists who have pushed for looser development and zoning laws in hopes of making it easier to build faster and taller.

The practical need for more space has at times morphed into lofty visions of building entire cities from scratch. Several years ago, Y Combinator, the start-up incubator, announced an initiative with dreams of turning empty land into a new economy and society. Years before that, Peter Thiel, the PayPal co-founder and billionaire Facebook investor, invested in the Seasteading Institute, an attempt to build a new society on lily pad-like structures in the law-and-tax-free open ocean.

Escape Therapy: On Douglas Rushkoff’s “Survival of the Richest”
By Raymond Craib

The opening pages of Survival of the Richest describe what could easily be a scene from a James Bond novel. Our protagonist — in this case, Rushkoff — is flying in business class to a distant airport. Provided with luxuries like noise-canceling headphones and warmed nuts, he will be met by a high-end limousine and ferried to a remote desert location. All of this is at the invitation and expense of an unnamed group of mysterious billionaires.

Despite the large honorarium these billionaires offer him, the exact reason they have invited him to their retreat is initially mysterious: as best as he can figure, he is to provide general insight on technology and its future. Once he finally meets them the next day, he is astonished to discover that they are looking for his guidance on how to survive what they refer to as “The Event” — “the environmental collapse, social unrest, nuclear explosion, solar storm, unstoppable virus, or malicious computer hack that takes everything down.” In their desire to protect themselves from the worst consequences of such scenarios, these individuals have begun to develop various lines of escape: bunkers, seasteads, plans for space colonies, and the like. The catch is that the apocalyptic futures they fear are largely a consequence of their own financial, ecological, technological, and ideological commitments. That they refuse to head off the End of Times by changing their behavior is the central irrationality embedded in what Rushkoff calls “The Mindset” (building on Richard Barbrook and Andy Cameron’s 1995 essay “The Californian Ideology”).

Billionaire preppers, he argues, are not interested in community.

So, what are they interested in? Well, for one thing, “aristocrat” luxury bunkers with bowling lanes, wine vaults, and swimming pools that go for a cool $8.3 million. And floating, private seasteads on the open ocean that carry a similarly hefty price tag. And a host of other schemes intended to protect them from all manner of grim realities. But these are ludicrous survival strategies. The microclimates are too fussy for long-term occupation, and no environment can be truly sealed off from the outside. Just as important, things break; replacement parts need to be manufactured, delivered, and installed; compounds need to be protected, and their guards kept loyal; and so forth. In other words, labor is an essential and obvious hiccup. The silliness of Rand’s fantasy of Galt’s Gulch was always the following: Miffed capitalists run off to a Rocky Mountain retreat? Awesome. Who needs them? The real labor that keeps things working will continue without them. The desperation with which the uber-capitalist class clings to a libertarian lexicon of “individuality” and “self-sovereignty” suggests that, at some level, they recognize their own parasitism. And indeed, it turns out that the billionaires who invited Rushkoff out to the desert want, more than anything, his approval of their libertarian philosophical positions.

An inclusive, distributed, and participatory future will not be the result of technology but of changing the social system. Technology may be a big deal, but the elephant in the room is billionaires’ wealth and outsized influence. It is the fact that they are billionaires that insulates them from the consequences of their own actions and compels them to obstruct and savage efforts to structurally address what ails us.

Democracy is a thorn in the side of Promethean world-builders, from the neoliberal ideologues battling the New Deal in the postwar era to the contemporary seasteaders and advocates of charter cities, from Rand and Friedman to Thiel and Musk. The messy process of consensus building, and of argument and compromise, is anathema to their self-interested idea of anti-politics. The refrains are predictable: Government is inefficient. Regulations inhibit creativity. Oversight is oppressive. Consensus and compromise take too much time. Politicians are corrupt and the plebes don’t know what is good for them. What is to be done? Trust the disrupters. Embrace greed (“effective altruism,” anyone?). Innovate. Unleash the entrepreneurial kraken! In this rendering, politics is a dead end, as captured by slogans like the Free Nation Foundation’s “Stop Complaining. Start Building,” or, more recently, the Startup Societies Network’s “Don’t Argue. Build.”

I will bracket for now a number of obvious issues that arise from enacting such slogans (they suggest that their advocates have never had to deal with zoning issues or are perfectly fine with not challenging a neighbor who wants to build a billboard in front of their bay window or a firing range in their backyard, or dump toxic waste in the garden, or cook meth in the basement, or … well, you get the point). Let me point instead to their insidiously dictatorial desire to evacuate debate and politics from the process of building.

The Mindset is not merely a state of mind; it has a materiality and material consequences. Given its proponents’ exploitative ethos and ability to wreck the planet, the Mindset needs to be defeated rather than ignored or, worse, tolerated. Do we really want to just look the other way when Elon Musk cries “Fuck Earth!” and shoots for Mars while tweeting positively about possible coups d’état because he wants to ensure the lithium extraction upon which his company relies? No doubt we must engage with one another in ways that mirror the world in which we want to live. But living as one might wish to live is a distant dream for much of the planet’s population, whose lives are — directly and indirectly, brutally and subtly — shaped daily by the activities of the billionaire bunker class.

Urban theorist and historian Mike Davis noted in an essay on climate change that we are not all in this together: a group of “first-class passengers” travels in style on our planet while a growing climate-lumpen fights it out in the heat of the approaching flames. Once it gets too hot, those elite passengers will jump ship — to underground bunkers, or to the ocean, or perhaps to orbital habitats, or most likely to heavily fortified private compounds. Having tried to escape death and taxes, they will next want to escape us and the conditions that they have helped create.

Doomsday Prep for the Super-Rich
By Evan Osnos

Max Levchin, a founder of PayPal and of Affirm, a lending startup, told me, “It’s one of the few things about Silicon Valley that I actively dislike—the sense that we are superior giants who move the needle and, even if it’s our own failure, must be spared.”

To Levchin, prepping for survival is a moral miscalculation; he prefers to “shut down party conversations” on the topic. “I typically ask people, ‘So you’re worried about the pitchforks. How much money have you donated to your local homeless shelter?’ This connects the most, in my mind, to the realities of the income gap. All the other forms of fear that people bring up are artificial.” In his view, this is the time to invest in solutions, not escape. “At the moment, we’re actually at a relatively benign point of the economy. When the economy heads south, you will have a bunch of people that are in really bad shape. What do we expect then?”

On the opposite side of the country, similar awkward conversations have been unfolding in some financial circles. Robert H. Dugger worked as a lobbyist for the financial industry before he became a partner at the global hedge fund Tudor Investment Corporation, in 1993. After seventeen years, he retired to focus on philanthropy and his investments. “Anyone who’s in this community knows people who are worried that America is heading toward something like the Russian Revolution,” he told me recently.

To manage that fear, Dugger said, he has seen two very different responses. “People know the only real answer is, Fix the problem,” he said. “It’s a reason most of them give a lot of money to good causes.” At the same time, though, they invest in the mechanics of escape. He recalled a dinner in New York City after 9/11 and the bursting of the dot-com bubble: “A group of centi-millionaires and a couple of billionaires were working through end-of-America scenarios and talking about what they’d do. Most said they’ll fire up their planes and take their families to Western ranches or homes in other countries.” One of the guests was skeptical, Dugger said. “He leaned forward and asked, ‘Are you taking your pilot’s family, too? And what about the maintenance guys? If revolutionaries are kicking in doors, how many of the people in your life will you have to take with you?’ The questioning continued. In the end, most agreed they couldn’t run.”

Élite anxiety cuts across political lines. Even financiers who supported Trump for President, hoping that he would cut taxes and regulations, have been unnerved at the ways his insurgent campaign seems to have hastened a collapse of respect for established institutions.

Robert A. Johnson sees his peers’ talk of fleeing as the symptom of a deeper crisis.

Johnson wishes that the wealthy would adopt a greater “spirit of stewardship,” an openness to policy change that could include, for instance, a more aggressive tax on inheritance. “Twenty-five hedge-fund managers make more money than all of the kindergarten teachers in America combined,” he said. “Being one of those twenty-five doesn’t feel good. I think they’ve developed a heightened sensitivity.” The gap is widening further. In December, the National Bureau of Economic Research published a new analysis, by the economists Thomas Piketty, Emmanuel Saez, and Gabriel Zucman, which found that half of American adults have been “completely shut off from economic growth since the 1970s.” Approximately a hundred and seventeen million people earn, on average, the same income that they did in 1980, while the typical income for the top one per cent has nearly tripled. That gap is comparable to the gap between average incomes in the U.S. and the Democratic Republic of Congo, the authors wrote.

Johnson said, “If we had a more equal distribution of income, and much more money and energy going into public school systems, parks and recreation, the arts, and health care, it could take an awful lot of sting out of society. We’ve largely dismantled those things.”

As public institutions deteriorate, élite anxiety has emerged as a gauge of our national predicament. “Why do people who are envied for being so powerful appear to be so afraid?” Johnson asked. “What does that really tell us about our system?” He added, “It’s a very odd thing. You’re basically seeing that the people who’ve been the best at reading the tea leaves—the ones with the most resources, because that’s how they made their money—are now the ones most preparing to pull the rip cord and jump out of the plane.”

Historically, our fascination with the End has flourished at moments of political insecurity and rapid technological change. “In the late nineteenth century, there were all sorts of utopian novels, and each was coupled with a dystopian novel,” Richard White, a historian at Stanford University, told me. Edward Bellamy’s “Looking Backward,” published in 1888, depicted a socialist paradise in the year 2000, and became a sensation, inspiring “Bellamy Clubs” around the country. Conversely, Jack London, in 1908, published “The Iron Heel,” imagining an America under a fascist oligarchy in which “nine-tenths of one per cent” hold “seventy per cent of the total wealth.”

At the time, Americans were marvelling at engineering advances—attendees at the 1893 World’s Fair, in Chicago, beheld new uses for electric light—but were also protesting low wages, poor working conditions, and corporate greed. “It was very much like today,” White said. “It was a sense that the political system had spun out of control, and was no longer able to deal with society. There was a huge inequity in wealth, a stirring of working classes. Life spans were getting shorter. There was a feeling that America’s advance had stopped, and the whole thing was going to break.”

Business titans grew uncomfortable. In 1889, Andrew Carnegie, who was on his way to being the richest man in the world, worth more than four billion in today’s dollars, wrote, with concern, about class tensions; he criticized the emergence of “rigid castes” living in “mutual ignorance” and “mutual distrust.” John D. Rockefeller, of Standard Oil, America’s first actual billionaire, felt a Christian duty to give back. “The novelty of being able to purchase anything one wants soon passes,” he wrote, in 1909, “because what people most seek cannot be bought with money.” Carnegie went on to fight illiteracy by creating nearly three thousand public libraries. Rockefeller founded the University of Chicago. According to Joel Fleishman, the author of “The Foundation,” a study of American philanthropy, both men dedicated themselves to “changing the systems that produced those ills in the first place.”

Why the billionaire class is kissing Trump’s proverbial ring
By Donald Earl Collins

Despite widespread belief to the contrary, the billionaires who have been seen in President Donald Trump’s orbit since he won the presidency for a second time last November are not mere sycophants to his regime.

It is true that paying fealty to Trump may make it seem like billionaires and trillion-dollar companies are caving in to a wannabe dictator. Yet it only seems that way. Mark Zuckerberg’s recent call with shareholders is a case in point. In revealing his good news about market share gains and higher-than-expected profits from the end of 2024, Zuckerberg commended Trump for his support of companies like Meta. “We now have a US administration that is proud of our leading companies, prioritises American technology winning and that will defend our values and interests abroad … I am optimistic about the progress and innovation that this can unlock,” Zuckerberg said.

For the wealthy and massive corporations, protecting their bottom lines is the end goal, with Trump as their means to their end. If it takes a bent knee or effusive praise to gain more market share or more power over the federal government’s purse, then so be it.

The easiest way to see that the billionaire class is in fact using Trump to reset the priorities of the federal government, and not allowing Trump to dictate terms to them, is to look at their relationship with Trump over the past year. Take the hundreds of millions of dollars that multi-billionaires like Elon Musk, Miriam Adelson, and Linda McMahon poured into Trump’s 2024 presidential campaign. During the post-election transition, Musk, Sam Altman, Vivek Ramaswamy, and Apple’s Tim Cook, along with trillion-dollar corporations like Amazon, Meta, Bank of America, and Goldman Sachs, also contributed millions of dollars to Trump’s $150m inauguration fund. In the past few months, Disney-ABC settled a defamation lawsuit filed by Trump for $15m, while Meta settled for $25m over banning Trump from Facebook and Instagram after the January 6 insurrection in 2021.

With such enormous sums of money involved, at the very least, the billionaire set has been paying Trump to land leading roles in his administration. Of course, there are other likely reasons for their behaviour, such as more access to government contracts for their businesses, or general access and influence over the president as he sets future economic and social policies domestically and abroad. Whatever their reasons, these capitalists and corporations are not giving Trump this money simply out of a sense of deference.

The Gilded Age Is Back — And That Should Worry Conservatives
By Joshua Zeitz

The Gilded Age was the era in the late 19th century when business and industry dominated American life as never before or since. It was a period of unprecedented economic growth and technological progress, but also of economic consolidation and growing wealth inequality. Titans of industry enjoyed enormous control over political institutions, while everyday Americans buckled under the strain of change.

The dawn of the Gilded Age was in large part a result of the Civil War, which accelerated America’s transformation from a country of small towns, shops and farms into an urban, industrial behemoth. The challenges of supporting 2 million soldiers and sailors dramatically increased industrial development.

As a consequence, from 1865 to 1873, industrial production increased by 75 percent, putting the United States ahead of every other nation save Britain in manufacturing output.

The resulting economic boom created unprecedented opportunities for graft. In 1872, the Crédit Mobilier scandal implicated 30 sitting members of Congress, as well as Vice President Schuyler Colfax, in an elaborate double-billing and securities fraud scheme associated with the construction of the Union Pacific Railroad line. Three years later, the Whiskey Ring affair exposed dozens of federal revenue collectors in a massive kickback scheme.

The boom proved lucrative to the small number of men who controlled access to local resources, but much less profitable for the hundreds of thousands of workers who supplied the muscle. Rarely did it occur to business and political elites that they had not prospered on their own talent and merit alone. And they didn’t owe their success just to workers; they also owed it to the government itself, which lavished moguls with subsidies and contracts. Tycoons like Leland Stanford and Collis Huntington benefited immensely from federal land grants and government-backed bonds that funded the construction of transcontinental railways. Industrialists and financiers like Carnegie and J.P. Morgan profited from lucrative war-time contracts supplying steel, arms and financial services to the Union government. Rockefeller, though often celebrated for his ruthless efficiency in building Standard Oil, also took advantage of favorable state and federal policies, including railroad rebates and tariffs that protected American oil interests.

The end result was a massive concentration of wealth in the hands of a relatively small number of companies. By the turn of the century, roughly 300 corporations controlled two-fifths of all U.S. manufacturing. And that concentration spilled over into politics.

The U.S. Senate was a perfect embodiment of this reality, with members often assigned monikers that reflected their economic allegiances, such as the “Senator from Standard Oil” or the “Senator from the Railroads.” Nelson Aldrich of Rhode Island, deeply connected to the Rockefeller empire by both marriage and business interests, championed policies that favored the oil industry and protective tariffs for industrial monopolies. Leland Stanford of California, a co-founder of the Central Pacific Railroad, used his Senate seat to advance railroad interests in the West, while Mark Hanna of Ohio, a wealthy industrialist and influential Republican political strategist, was instrumental in securing William McKinley’s rise to the presidency.

Beyond campaign financing, McKinley also benefited personally from the financial backing of these industrialists — Hanna himself helped bail McKinley out of personal debt before he entered the White House, in a striking conflict of interest. His presidency, in turn, reflected the interests of his benefactors, as he promoted pro-business policies, high tariffs and a gold standard that favored banking and industrial elites over working-class Americans. This fusion of wealth and politics signaled the cementing of corporate power in American governance at the turn of the 20th century.

The Economic Consequences of State Capture
By Elizabeth David-Barrett

In struggling democracies around the world, small cliques of politicians, business elites, and politicians with business interests—what political scientists call “poligarchs”—have warped the state to serve their interests. Together, these unholy alliances change rules, fire bureaucrats, silence critics, and then eat up the country’s resources. The politicians commandeer banks, rewrite regulations, and take control of procurement contracts. Their friends in the private sector, meanwhile, provide kickbacks, donations, and favorable media coverage.

There is a name for this process: state capture. It has occurred in Bangladesh, Hungary, South Africa, Sri Lanka, Turkey, and many other countries. Its exact economic effects can be difficult to quantify, and it often takes years before they fully manifest. But they are serious. In captured economies, the relationship between talent and success is severed. Skilled workers who lack the right political connections leave the country, and competent firms go under. Well-networked firms, meanwhile, grow fat without innovating or delivering quality products (or, sometimes, without delivering products at all). The country’s infrastructure deteriorates. Banks run out of money after making bad loans to favored businesses. The result is lower growth, fewer jobs, rising inequality, and high inflation.

In a healthy economy, companies compete on quality and price. But in captured ones, firms succeed by forging the right relationships, giving them little incentive to innovate or be efficient. Some of the best companies lose out simply because they lack the right networks. Would-be entrepreneurs don’t bother starting businesses. Many skilled workers leave the country in search of markets in which talent, not proximity to power, is rewarded. Favored companies, meanwhile, overcharge and underdeliver. Economic output, in turn, declines. Quality of life worsens.

But power-grabbing leaders push on regardless, and they usually succeed at avoiding accountability. State capture, after all, effectively ensures that the most powerful people in the country are the captors. They have the most money and control over the political apparatus.

Trump Showed Images of ‘Genocide’ in South Africa. One Was From the War in Congo.
By Lynsey Chutel and Monika Cvorak

In his meeting with President Cyril Ramaphosa of South Africa on Wednesday, President Trump claimed white South African farmers were victims of genocide and, to support that assertion, held up images that he said were from South Africa and that he claimed showed some of those victims being buried. The Reuters news agency said Friday that the photos were actually of the conflict in eastern Congo. That was not the only false claim he made.

During the encounter, Mr. Trump presented a stack of articles and blog posts as evidence of the persecution of white farmers in South Africa. He shuffled through them as Mr. Ramaphosa squinted at the pages, trying to see what they said.

Mr. Trump had aides dim the lights in the Oval Office and showed a video during the meeting that featured the booming voice of a rabble-rousing South African opposition politician known for controversial comments, Julius Malema.

Along with a montage collected from years of interviews, the video also showed Mr. Malema, who is Black, shouting an apartheid-era chant — “Kill the Boer!” “Kill the farmer!” — at a packed stadium rally for his party, the Economic Freedom Fighters, while making a sound meant to imitate gunfire.

Groups that claim Afrikaners are the victims of violent persecution in South Africa have seized on Mr. Malema’s comments and songs, even as his own party’s popularity dims.

In South Africa, the English-language outlet News 24 traced the video Mr. Trump played to a social media account known for spreading misinformation. The investigative report found that Elon Musk had reposted the same video at least twice on X, the social media platform he owns.

A ‘rogue employee’ was behind Grok’s unprompted ‘white genocide’ mentions
By Liam Reilly and Hadas Gold

Elon Musk’s artificial intelligence company on Friday said a “rogue employee” was behind its chatbot’s unsolicited rants about “white genocide” in South Africa earlier this week.

The clarification comes less than 48 hours after Grok — the chatbot from Musk’s xAI that is available through his social media platform, X — began bombarding users with unfounded genocidal theories in response to queries about completely off-topic subjects.

In an X post, the company said the “unauthorized modification,” made in the early morning hours Pacific time, pushed the AI-imbued chatbot to “provide a specific response on a political topic” that violates xAI’s policies. The company did not identify the employee.

Musk, who owns xAI and currently serves as a top White House adviser, was born and raised in South Africa and has a history of arguing that a “white genocide” was committed in the nation. The billionaire media mogul has also claimed that white farmers in the country are being discriminated against under land reform policies that the South African government says are aimed at combating apartheid fallout.

Less than a week ago, the Trump administration allowed 59 white South Africans to enter the US as refugees, claiming they’d been discriminated against, while simultaneously suspending all other refugee resettlement.

Elon Musk Is South African. We Shouldn’t Forget It.
By William Shoki

Mr. Musk is one of a number of reactionary figures with roots in Southern Africa who found an unlikely home in Silicon Valley and now wield disproportionate influence in shaping American and global right-wing politics. These men, such as Peter Thiel and David Sacks, emerged from a historical tradition that revered hierarchy and sought to sustain racial and economic dominance, only to find themselves in a world where that order was unraveling. Their politics reflect an instinct to preserve elite rule, cloaked in the language of meritocracy and market freedom, while channeling resentment toward new power structures they view as threats to their position.

How three brothers ‘captured’ a country
By David Hindley

The scandal of how three brothers looted at least $1 billion of public money in South Africa became front-page news when Jacob Zuma resigned as president in 2018.

After the end of apartheid in 1994, South Africans hoped their young democracy would never again be subverted by narrow interests. But this scandal became an object lesson in the dangers of so-called “state capture”, or systematic and sophisticated corruption.

The three Gupta brothers grew up in Uttar Pradesh, north of Delhi, and were encouraged by their father to go out into the world. Ajay went to Russia, Rajesh to China and Atul, the middle son, ended up in Johannesburg in 1993.

He quickly persuaded his brothers to join him, and they began building a business empire that stretched from selling computers to mining, property and media.

Within a few years, the Guptas started making connections in the ruling African National Congress (ANC) party and were soon introduced to Zuma.

As they courted Zuma, the Guptas put several members of his family on the payroll, giving jobs to one of his daughters, one of his wives and, most significantly, to his son Duduzane. According to the official inquiry, the Guptas also paid for Duduzane’s wedding, his hotel stays, and made him a shareholder in several Gupta-linked companies.

Having built a relationship with Zuma, the Guptas proceeded to influence the appointment of the heads of key state-owned businesses such as Eskom, Africa’s largest electricity provider, and Transnet, the state railway company. They then diverted contracts worth $3.5 billion to companies they owned or were linked to, the inquiry found.

Two years later, the influence that the Guptas exerted on the government became very clear, with reports that they had engineered the removal of the finance minister, Nhlanhla Nene, who was replaced by an inexperienced MP.

The inquiry, headed by South Africa’s chief justice Raymond Zondo, began work in August 2018 and heard from more than 300 witnesses.

It released six reports between 2021 and June 2022, which together painted a “scarcely believable picture of rampant corruption”.

Here are some of the main points from the inquiry, as reported by the FT:

  1. In 2011 Zuma blocked the country’s intelligence services from investigating the Gupta brothers. The inquiry called this “if not the fundamental cause of state capture, certainly one of them.”
  2. At Eskom, the main supplier of electricity in South Africa, Zuma “readily opened the doors” for the Guptas to “help themselves to the money and assets of the people of South Africa.”
  3. Transnet, the country’s state rail operator, gave the Guptas irregular contracts that were valued at $2.7bn and was a “primary site” of looting.
  4. The restructuring of South Africa’s revenue service by management consultancy Bain & Co between 2012 and 2015 was “a clear example of how the private sector colluded” on state capture.

Is America a Kleptocracy?
By Jodi Vittori

While there is no universal definition of corruption, one of the most common, as defined by the advocacy group Transparency International, is “the abuse of entrusted power for private gain.”

There are many flavors of corruption. Currently, the most concerning kind is grand corruption, which occurs when public institutions are co-opted by networks of ruling elites to steal public resources for their own private gain. It involves a wide variety of activities, including bribery, extortion, nepotism, favoritism, cronyism, judicial fraud, accounting fraud, electoral fraud, public service fraud, embezzlement, influence peddling, and conflicts of interest.

There is not one specific definition of kleptocracy beyond that of “rule by thieves.” As with grand corruption, a kleptocracy involves tightly integrated networks of elites in political, business, cultural, social, and criminal institutions engaging in bribery, extortion, and other destructive actions. But additional characteristics make kleptocracy stand out even above grand corruption.

First, the grand corruption in a kleptocracy is systemic, deeply networked, and self-reinforcing. Setting up a complex and highly lucrative corruption scheme is one thing; transforming institutions so that multiple streams of grand corruption keep flowing through multiple networks for years or decades is a whole different level of kleptocratic wherewithal.

Second, the consequences of a kleptocracy distort long-term political and socioeconomic outcomes. While grand corruption schemes may net elites billions of dollars, in a large enough economy they may not affect the average citizen much. In a kleptocracy, the distortions are so massive that average citizens cannot miss the impacts on their lives.

Third, in non-kleptocracies, grand corruption scandals may shock the conscience and grab headlines because they are not the norm. In a kleptocracy, grand corruption is not an aberration but the unifying purpose and core function of the state. The scandals come so fast, and are so widespread and so large, that many citizens feel powerless to respond.

Key elites—referred to popularly as oligarchs—are instrumental in a kleptocracy. Oligarchy is derived from the ancient Greek words oligoi (“few”) and arkhein (“to rule”). Aristotle described oligarchy as “when men of property have the government in their hands.” Per Aristotle’s definition, to count as an oligarchy, the wealthy must be able to influence the government so as to protect their wealth and power at the expense of the larger population.

Kleptocracy and kakistocracy in 1990s Russia
By Branko Milanovic

What actually happened was the privatization of all government functions and the transfer of wealth that millions of people, often paid miserable wages, had contributed to over decades, if not a century. That wealth, created by millions and owned by the state, was then frittered away or handed to several hundred individuals, most of them with no merit except the ability to manipulate others, to run private militias, and to act without any moral scruple. They thus became immensely wealthy.

The destroyers’ approach was simple, efficient and apparently never imagined by the hundreds of creative and highly paid US and Western “advisers” to the Yeltsin government. Instead of privatization being used to increase enterprise efficiency, as these ingenues believed it would be, it was used by Berezovsky and others to destroy the enterprises. The first and the most important step was the “privatization of profits”.

The approach consists in coopting the management of the target companies, either through financial inducement (division of the loot) or through threats which, when they failed to persuade, in many cases ended with reluctant managers suddenly drowning in rivers or jumping from windows. Once the management is coopted, it intentionally makes decisions that go against the interests of the company and its workers.

But we shouldn’t forget the political enablers of kleptocracy. The key was Yeltsin (personally not corrupt) and his family and inner circle (thoroughly corrupt). The origin of kleptocracy predates the famous loans-for-shares deal of 1995-96. It goes back to the last years of the Gorbachev reforms, but it accelerated under Yeltsin, fueled in large part by the hysterical fear of the Communists’ return to power.

When electoral spinning was insufficient, stronger tactics were used. Yeltsin disbanded the Parliament when it began impeachment proceedings against him and eventually bombed the deputies out. I doubt The Washington Post today would support the same “democratic approach” if used by Trump. But it did for Yeltsin in 1993.

Britain’s second empire: how London became an oligarchs’ playground
By Kojo Koram

Britain’s reputation for harbouring offshore wealth began in the 1950s with some innovative thinking from a scrappy Birmingham bank called Midland. After the Wall Street crash of 1929 and the chaos of the Second World War, postwar politicians and economists agreed that it was important to control the cross-border flow of money more tightly. Capital controls limited the foreign ownership of assets and restricted banks’ trading in foreign currencies. The 1947 Exchange Control Act was passed in the UK, and with the empire in terminal decline, it appeared that the sun was setting on London’s time as the world’s financial centre.

However, Midland Bank realised that, in the divided world that was emerging in the early stages of the Cold War, a lot of people living in the eastern bloc, and therefore officially enemies of the US government, would nevertheless want to keep their money in US dollars, which had firmly replaced pound sterling as the world’s reserve currency. Midland quietly started letting Soviet bureaucrats and diplomats leave their secret dollars in British bank accounts. No one knows how long this had been going on, but in 1955 the Bank of England noticed that Midland Bank was holding large amounts of dollars, seemingly not for the purpose of specific transactions. This degree of transnational financial exposure was exactly the type of scenario that the restrictions on the flow of money were put in place to minimise.

The Bank of England was faced with a dilemma: ignore Midland’s actions and undermine the system of exchange controls, or discipline the bank but chase away a lucrative new source of foreign money and, perhaps more importantly, signal to the world that the curtains really were closing on the City of London’s spell as the centre of the international financial system. The Bank of England decided to go with the first option: it raised no objection to Midland’s holding large deposits of “offshore” US dollars, giving implicit permission to other British banks to do the same.

The Kleptocrat’s Sidekick
By Marie Le Conte

London, you may be aware, is the place to be if you’re a shadowy individual with piles of ill-gotten money who wishes to, ideally, shield said ill-gotten money from view.

… Oliver Bullough explained in Butler to the World how the United Kingdom, despite its professed adherence to the rule of law, became a key player in frustrating global anti-corruption efforts.

Bullough is an expert on the matter, and wrote the foreword for Indulging Kleptocracy: British Service Providers, Postcommunist Elites, and the Enabling of Corruption, which came out in the United States earlier this year. Far from walking on well-trodden ground, John Heathershaw, Tena Prelec, and Tom Mayne’s book goes in for a closer look, choosing to focus on the people applying the grease to corruption’s wheels.

We all know what the world’s oldest profession is, Bullough opens, and we can assume that the pimp came second. The third is what our three academics decided to study: namely, “the sidekick, the minion, the crony, and the enabler who is prepared to hold a victim’s arms back while the pimp punches them.” These roles may come with “little glory,” but they can be deeply lucrative.

The book could have begun with, say, the fall of the Soviet Union, but it instead starts much earlier, in the 16th century, as a certain Martin Luther nails his theses to a church door in Wittenberg. The Reformation was caused partly by pushback against the Catholic Church’s increasing reliance on indulgences, which essentially allowed sinners to pay their way into heaven. Anything you’d done on this earthly realm could be swiftly forgotten with the right investments or charitable contributions.

Similarly, modern indulgences seek to “enrich the enablers, assuage the corrupt, and muddy the waters between right and wrong, truth and falsehood,” the authors write. Essentially, in exchange for some money, today’s wrong’uns can receive social status in Western democracies. These indulgences are granted by estate agents, lawyers, accountants, and wealth managers, among other professionals. What these people have in common is that they “facilitate transactions between two or more parties, at least one of whom has a source of wealth which is illicit.”

To study these enablers, the authors outline the different professional sectors that are “working to transform the world from one dominated by sovereign nation states … to one where public-private networks of elites dominate.” According to them, the nine ways that kleptocrats and their cronies do this are by hiding money, listing companies, selling rights, purchasing properties, explaining wealth, selling status, making friends, tracking enemies, and silencing critics.

Estate agent Benson Beard, for one, provides some light if acidic entertainment by telling a “client” (in fact an undercover anti-corruption campaigner filming a Channel 4 documentary), “Don’t talk to me about how [the money] comes here. … We have certain regulations within our industry where I don’t need to know where things come from.”

This quote offers a glimpse into the mindset of enablers and hints at why they have proliferated in Britain. One part of the answer lies in historical coincidence, whereby the Soviet Union was dismantled, with capital divided up between lucky elites, at the same time as the City of London continued to deregulate further and further as a way to rebuild an economy that could no longer rely on industrial production. Essentially, the British state found itself in need of money and amenable to turning a blind eye to where it came from, just as Slavic and Central Asian countries were suddenly headed by people with piles of money and no stable place to guard their wealth.

The lessons and implications of seizing Russian oligarchs’ assets
By Branko Milanovic

The first and most obvious lesson that we can draw from the confiscation of Russian oligarchs’ assets is that pre-February 24 Russia was not an oligarchy, as many believed, but an authoritarian autocracy. Instead of being ruled by a few rich people, it was ruled by one person. To draw this (rather obvious) conclusion, we need to go back to the initial rationale given for the threat of asset seizure. When the US government spoke of seizing the oligarchs’ assets, it did so before the war and in the expectation that the oligarchs, faced with the prospect of losing most of their money, would exert pressure on Putin not to invade Ukraine. We can assume that 99%, or perhaps all, of the targeted oligarchs (and even those who feared they might be targeted) realized the stakes and must have been against the war. But their influence was, as we know, nil. Ironically, they were punished because they were not powerful.

If their influence on such an important matter, on which their entire assets and lifestyle depended, was nil, then the system was clearly not a plutocracy but a dictatorship. I wrote about that in my July 2019 piece “Oligarchs and oligarchs”, distinguishing between the early Russian billionaires who manipulated the political system (one should not forget that it was Berezovsky who brought Putin to Yeltsin’s attention because he thought that Putin could be easily controlled) and more recent billionaires who were treated as custodians of assets that the state may, by political decision, take from them at any point in time. It happened, unexpectedly, that it was not the Russian state that took their assets, but the American state. But it did so precisely because it thought (probably not accurately in all cases) that these billionaires were “state oligarchs”.

I think that the future oligarchs (who are probably now taking their first steps) will realize that they must either stick together or hang separately. Under Yeltsin, when they did dictate the government’s policy, they preferred to fight each other, brought the country close to anarchy and even civil war, and by doing so facilitated the rise of Putin, who introduced some order.

The new post-Putin billionaires will probably not forget that lesson: so we may expect them to favor a weak central government, that is, a true oligarchy, and to insist on the domestic rule of law—simply because they will no longer have anywhere else to move their wealth.

What Trump Is Doing With Crypto Should Worry Us All
By John Reed Stark and Lee Reiners

For 16 years, crypto enthusiasts have promised a “fourth Industrial Revolution,” pledging that crypto technology would transform the planet by democratizing wealth. Yet while other digital payment systems backed by established financial institutions, like Apple Pay, have flourished, cryptocurrency has yet to prove that it has any practical and legitimate utility.

Instead, what cryptocurrency has given our world is a shield that facilitates crime, from sex trafficking to ransomware attacks, drug dealing to child pornography. North Korea has become a crypto superpower, stealing over $6 billion worth of crypto through hacking over the past decade. By using unregulated offshore exchanges to convert the stolen crypto into cash, North Korea has funded its nuclear weapons program and shored up its sanctions-ravaged economy.

It was only a couple of years ago that the collapse of a leading crypto exchange, FTX, amid financial mismanagement and fraud undermined investor and public trust in the crypto industry. And it was only 17 months ago that Binance, another large crypto exchange, pleaded guilty to money-laundering violations, as terrorist financing, hacking and drug trafficking proliferated on its platform.

That was before the second Trump term. The S.E.C. suspended its civil fraud case against Binance in February. Company executives have met with Treasury Department officials to discuss loosening government oversight, The Wall Street Journal reported, while Binance has been exploring a deal to list a new cryptocurrency from a venture backed by Mr. Trump’s family.

Crypto is also making more inroads into the world of traditional finance. Last month, federal regulators reversed a policy that required banks to obtain approval before offering crypto-related products and services. And both the House and Senate are debating bills that would provide a new regulatory framework for stablecoins, a type of crypto intended to maintain a stable value and allow for easier trading of different cryptocurrencies, with the aim of further integrating them into the banking system.

This state of affairs brings to mind a similar moment in our history — the 1920s, when insider trading, market manipulation and lack of transparency destroyed public confidence in the system and helped set off the stock market crash that in turn played a part in the Great Depression.

Contributor: How America’s mistrust of institutions birthed the false promises of the crypto craze
By Rebecca Ackermann and Poppy Alexander

Trust in government has hit record lows, and financial independence feels beyond the reach of most. While millennials earn more than previous generations, they own only 5% of the wealth in the U.S., and Gen Z owns less. Home prices are still far outpacing the rise in wages, and the pandemic has only stratified wealth further. Efforts at wealth-building in this environment are becoming more scattershot and proudly anti-establishment. Companies whose stock goes viral become meme-stocks, their prices soaring with social media popularity rather than the underlying financials.

The expert class is failing, and so is Biden’s presidency
By Nate Silver

Begin with the response to September 11 — the Afghanistan and Iraq Wars, which were supported by bipartisan majorities. Then the financial crisis and the bank bailouts. Then Brexit and the election of Trump. Then the pandemic: what was supposed to be a triumph of management for a technocratic elite instead wound up as a worst-of-all-worlds scenario with prolonged restrictions and school closures and 7 million dead — from a virus possibly caused by sloppy scientific research practices. Then massive inflation, which was supposed to be a thing of the past. Throw in here, if you like, “wokeness” and how it’s eroded trust in higher education and triggered a cultural backlash.

Of course, the experts have gotten their comeuppance.

People sometimes ask me why I’m often harsher on the left than the right, despite agreeing with the expert class on more things than not. (I’d note here that I’m using the term “left” loosely: it’s not the traditional labor/Bernie Sanders left that I usually feud with, but rather two other groups: Democratic partisans and illiberal academic types.)

As I discuss in my book, I instead see myself as straddling two communities: the expert class of academics, journalists and like-minded types that I call “the Village”, and the calculated risk-taking class — tech and finance types, poker players and crypto geeks — that I call “the River.”

There’s been a pullback from peak wokeness, certainly, and even institutions of higher learning are finally getting the message by doing things like restoring standardized testing requirements. The Democratic Party did eventually get rid of Biden to at least give itself a shot.

However, there has been an arc toward institutional decline. The failures of Biden’s presidency were not due to bad luck or “misinformation” among the broader electorate but rather were failures of its own making: overstimulating the economy, relaxing border controls amid a massive public backlash to immigration, and then trying to run Biden again. Plus, inefficient and sometimes corrupt governance in blue cities and states, which have steadily become less livable.

I have a thesis about why these institutions have been performing more and more poorly. For roughly the past 20 years, Democrats have increasingly become the party of the educated, a trend that Trump’s election in 2016 really locked in. The result is what I call the Indigo Blob: the merger between formerly nonpartisan institutions like the media, academia, and public health on the one hand — institutions that draw almost exclusively from the ranks of college graduates — and expressly partisan and political instruments of the Democratic Party and progressive advocacy groups on the other hand.

The Indigo Blob has control of the “means of moral production”: it writes the story, at least as supposedly respectable people are expected to read it. It runs the newspapers and writes the Hollywood scripts. It awards the BAs, MDs and PhDs. And it seeks to shut out and shut up dissenters — it can be absolutely vicious toward people like me and Yglesias. In this latter capacity, it is increasingly failing: people like Matt and me have found a huge audience through Substack without compromising our values or getting too “red-pilled” (or at least I’d like to think so). Still, the Indigo Blob is eagerly flocking away from Twitter toward Bluesky, where it hopes to build a new echo chamber.

The Indigo Blob can weave superficially compelling narratives, often involving a lot of whataboutism. Biden pardoned Hunter? Well, what about Trump pardoning Paul Manafort? Those school closures were bad? Well, what about anti-vaxxers? Not on board with full-blown wokeness? Well, then you’re in league with the fascists. But these stories have become increasingly desperate and implausible. The Indigo Blob suggested that it was “ageist” to be concerned about Biden wanting to be president until he was 86. It said that educated white men brought about Trump’s victory, even though college-educated whites were actually the only group of men who didn’t swing heavily MAGA.

Both the multiethnic working class and an increasing number of highly successful people like those in the River are seeing through the bullshit.

Krugman vs. Krugman
By Michael Lind

The union-weakening, wage-depressing, native-displacing effects of mass unskilled immigration have been well-documented in the case of janitors, construction workers, and meatpackers.

In the case of meatpacking, industry experts (Krugman is not one) acknowledge that immigration has enabled employers to pay low wages, as an alternative to raising wages and benefits to attract citizen-workers. The authors of a 2022 study in the Journal of the Agricultural and Applied Economics Association conclude: “The results indicate that higher wages along with additional nonwage benefits would have expanded the labor supply”—in the absence of expanded immigration.

Between 1980 and 2010, chiefly as a result of the massive expansion of the H-1B program, the share of American computer science jobs held by foreign-born workers exploded from 7.1% to 27.8%. In 2021, 74.1% of the 407,071 H-1B visas issued to specialty foreign workers by the U.S. went to nationals from India. The overwhelming share of young Indian men among H-1Bs reflects not any extraordinary skills that they alone possess but rather their willingness to work for lower wages and benefits than their American counterparts, as well as the accidental importance of Indian labor contractors or “body shops” as suppliers of indentured servants to U.S. companies beginning in the 1990s.

As Daniel Costa and Ron Hira point out in a 2020 study, the Department of Labor sets the two lowest wage levels for H-1Bs well below the local median wage. “Not surprisingly,” Costa and Hira write, “three-fifths of all H-1B jobs were certified at the two lowest prevailing wage levels in 2019.”

This finding bears some attention. If H-1Bs are all geniuses with unique and valuable skills that both American workers and immigrants with green cards lack, then why are tech firms and their contractors so determined to pay most of their H-1Bs the very lowest wages permissible under U.S. law? Costa and Hira point to corporate savings on wages: “Wage-level data make clear that most H-1B employers—but especially the biggest users, by nature of the sheer volume of workers they employ—are taking advantage of a flawed H-1B prevailing wage rule to underpay their workers relative to market wage standards, resulting in major savings in labor costs for companies that use the H-1B.”

… the H-1B program has nothing to do with any lack of skills among American workers. Rather it is an example of labor arbitrage by employers who prefer to employ nonunion foreign indentured servants without voting rights and many legal rights over Americans who would demand higher wages and better treatment. Disney and other companies have even forced their American employees to train the H-1Bs brought in to replace them. If H-1B guest workers have unique skills that American workers lack, why do they need to be trained for their jobs by the American workers they are replacing?

Just as Republicans favored wage-suppressing mass immigration when they were the party of the affluent, college-educated overclass, today’s elitist Democrats now favor a never-ending stream of immigrant workers with little or no bargaining power for their constituents—like Silicon Valley donors whose firms depend on exploiting H-1B indentured servants, and urban professionals whose two-income lifestyle depends on a bountiful supply of cheap nannies, maids, restaurant workers, and Uber drivers.

In an Age of Right-Wing Populism, Why Are Denmark’s Liberals Winning?
By David Leonhardt

Immigration is likely to remain a defining political issue in coming years because poverty, political instability, climate change, trafficking networks and social media will continue to push residents of poor countries toward richer ones. Yes, those richer countries, where birthrates have plummeted, will need to admit immigrants to keep their economies functioning smoothly. But the approach that the United States and Western Europe have taken in recent decades has failed.

Immigration has often been chaotic, extralegal and more rapid than voters want. The citizens of Europe, the United States and other countries were never directly asked whether they wanted to admit millions more people, and they probably would have said no if the question had appeared on a ballot. Instead, they revolted after the fact. Trump won in 2016 and 2024 partly by running on a platform of mass deportation. In Europe, the parties of the far right were long the only opponents of immigration, and they have been rewarded with large gains.

Rapid immigration can strain schools, social services, welfare programs and the housing market, especially in the working-class communities where immigrants usually settle (as happened in Chicago, Denver, El Paso, New York and elsewhere over the past four years). Many studies find a modestly negative effect on wages for people who already live in a country, falling mostly on low-income workers. A 2017 report by the National Academies of Sciences, Engineering and Medicine, intended as a comprehensive analysis of the economic effects of immigration, contains a table listing rigorous academic studies that estimate immigration’s effects on native wages; 18 of the 22 results are negative.

A healthy political debate over immigration would have grappled with its complexities. It would have acknowledged that immigration increases G.D.P. in unequal ways, with the affluent enjoying more of the advantages, while poor and working-class people, including recent immigrants, bear more of the costs. Angus Deaton, a Princeton economist, Nobel laureate and immigrant from Britain, points out that some large sectors where many immigrants work provide services that wealthy people disproportionately use. Restaurant dining, landscaping and construction are all examples. Immigrants have created a larger labor pool, which holds down both wages (hurting workers) and prices (helping upper-income people who dine out frequently and live in large homes with nice yards). As Deaton says, the expanded pool of landscape workers has been good for the well-heeled residents of Princeton, N.J.

The promise of the United States as a beacon for (as Emma Lazarus famously put it) the world’s “huddled masses yearning to breathe free” remains a stirring one. It may be the most noble American creed of all. We are a nation of immigrants, and we derive enormous benefits from that status. But even during peak periods of immigration, the United States has admitted only a tiny fraction of the people who would prefer to live here than in their home countries. Global polling by Gallup estimates that nearly one billion of the globe’s eight billion people would like to migrate, and the United States is their most desired destination. The question has always been what small percentage of would-be Americans this country will choose to accept.

How Democrats Lost Their Way on Immigration
By Isaac Chotiner

Of all the issues that Democrats had real control over in 2024, do you think immigration was the one they screwed up the most?

Yes, and I think the second half of that is important. I think the Democrats had three really problematic issues. Inflation, which is probably the biggest, and I think they had some control over that, but it was limited. Then there was Biden’s age and the notion that he and the people around him, including [Vice-President Kamala] Harris, weren’t especially honest about it, and they probably had some control over that, too.

Yeah, they had some control over that.

Yes. And then the third is immigration. And under President Biden, we had the largest surge of immigration over a short period in American history. The pace was even faster than the peak pace of the Ellis Island years of the late nineteenth and early twentieth century. We had eight million net immigrants come into the country. It appears that about five million of them entered illegally, a vastly faster pace than under Trump or Obama. You had Biden telling people during the 2020 campaign that he wanted more people to come to the country and then loosening a whole bunch of policies. Almost immediately, immigration surged. And it was never what most American voters wanted. It particularly was not what lower-income voters across races wanted. It was unpopular from the beginning. And it happened in large part—not exclusively, but in large part—because of the policies they enacted.

You write about how the academic left has hurt the messaging and policies of parties across the developed world. But are the parties that have struggled with the issue really part of the academic left? Germany has had Chancellors from both the Social Democratic Party and the center right in the past ten years, and it was the latter, Angela Merkel, who oversaw a huge rise in immigration.

That’s part of why Denmark is so interesting to me. I agree with you that it’s not just the Brahmin left that drove the shift toward a much more open immigration policy in the past several decades in Europe, the United States, and Canada. It’s also the corporate right; business executives have always liked high immigration. It’s been a consensus of mainstream parties that more immigration is some combination of desirable and inevitable. It’s Angela Merkel in the center right, it’s Tony Blair in the center left. In a lot of European countries, the share of the foreign-born population was around five per cent a few decades ago, and now it’s often fifteen per cent. That’s an extremely rapid and large shift.

And what’s so interesting about Denmark is: you have the center left say, No, we’re really going to slow immigration down. We think the levels of immigration that we’ve had are bad for working-class people. We think they strain our social services, they strain our schools, they create more wage competition, and those costs fall overwhelmingly on working-class people. And so, to me, what’s important about Denmark is that there was a party where people who were concerned about immigration could go that wasn’t on the extreme right. And once the Danish center left provided that place, it really marginalized the far right.

Trump is going to overreach. He is going to treat immigrants cruelly in ways that most Americans will not support. He did so eight years ago, and he’s doing so again. And that’s going to create an opportunity for Democrats. After the chaos of the Biden years, when you looked at polls, you saw that people were deeply unhappy with Biden’s policy and really wanted the level of immigration to be cut in this country. But still more Americans said that immigration was more of a strength of our country than a weakness.

I think the problem has been that you have had this élite wing of the Democratic Party that has argued that immigration is a free lunch and has been pretty disdainful of the concerns of working-class people of all races. You actually did an interview with someone from an immigration think tank in which he claimed that immigration is all benefit and no cost, which is a remarkable claim, and it’s not consistent with the evidence. Immigration has huge benefits, and it really does have meaningful costs. And those costs fall disproportionately on working-class people. And so I don’t know exactly what a winning message on immigration is, but it’s going to have to be one that is more moderate than the Party’s has been, and one that acknowledges the complications: immigration has huge benefits both for immigrants and for the country, but it also has meaningful downsides.

New Insights on Why Harris Lost—and Why Democrats Are in Such a Hole
By Michael Baharaeen

6. Democrats must contend with the fact that America is a center-right country.

This is something I have written about extensively elsewhere. Shor explains:

Fundamentally, 40 percent of the country identifies as conservative. Roughly 40 percent is moderate, 20 percent is liberal, though it depends exactly how you ask it. Sometimes it’s 25 percent liberal. But the reality is that, to the extent that Democrats try to polarize the electorate on self-described ideology, this is just something that plays into the hands of Republicans.

This doesn’t, however, mean that only 20 percent of the country supports the liberal view on all issues. In fact:

If you look at moderates—and especially nonwhite moderates—a bunch of them hold very progressive views on a variety of economic and social issues. A very large fraction of Trump voters identify as pro-choice. We’ve seen populist economic messaging do very well in our testing with voters of all kinds.

But Shor offers some key warnings for Democrats, and specifically for their more liberal-leaning voters:

I think that there are also some big cultural divides between highly educated people who live in cities and everybody else. And to the extent that we make the cultural signifiers of these highly educated people the face and the brand of our party, that is going to make everyone else turn against us.

And:

It’s not just that the New York Times readers are more liberal than the overall population—that’s definitely true. It’s that they’re more liberal than they were four years ago—even though the country went the other way.

The country’s center-right tilt has also made it especially difficult for Democrats to have sustained success in the U.S. Senate, the body of Congress that, among other things, is responsible for confirming presidential appointments to the Supreme Court. Even in 2018 and 2020, two great years for Democrats, they struggled to make gains in the upper chamber, and following 2024, they now sit three seats short of a majority (or four, if you take into account that Republicans hold the tie-breaking vote). Worse still, the path toward regaining the majority looks difficult.

Shor outlines what voters seem to want from Democrats moving forward: moderating on the culture war while speaking to people’s economic pain and desire to improve their material conditions. Blue Rose’s polling routinely found that inflation was the top issue, and also showed that voters who were frustrated about the economy were overwhelmingly looking for a “shock to the system.”

Can We Have a ‘Party of the People’?
By Nicholas Lemann

Countries with parliamentary systems can have social-democratic parties, nationalist parties, green parties, ethnic parties, business parties, regional parties, religious parties, feminist parties, agricultural parties, and so on, which can fall into and out of coalitions with one another. The United States has a peculiarly durable two-party system that makes this process invisible because it takes place behind a deceptive façade that presents to the world one party for liberals and one for conservatives. Figuring out where American politics is moving ideologically requires better definitions than the broad and obvious terms the two parties offer us.

One way to understand the professional, or limousine, liberals is to see them as comparable to the old, and vanished, liberal wing of the Republican Party, now reborn as a visible and influential wing of the Democratic Party.

Limousine liberals are well educated, confident, and more closely attuned to issues like racial justice, environmentalism, feminism, human rights abroad, and cultural tolerance than to the economic welfare of laboring people in the United States.

Winning this group over has helped the Democrats financially and electorally, but in politics any mass defection has a strong effect on both the party the defectors left and on the party they joined. Gaining southern and evangelical voters (two overlapping categories) helped the Republicans win elections; but those Republican gains entailed adopting policies on issues like abortion and guns that drove many of the party’s educated liberals into the arms of the Democrats—who then moved right on economic issues by way of accommodating them.

How to fight a class war
By Michael Lind

Manufacturing industries, along with the working class, have been driven out by prices and regulations from cities such as New York and London, which the college-credentialled overclass seeks to reconstruct as clean, walkable, resort-like settings for high-end consumption by professionals in business and finance. Whatever the merits of America’s Green New Deal, the EU’s Green Deal, and the British equivalent may be, the costs of a rapid transition to “net zero” carbon emissions fall much more heavily on the employment and consumption of non-college-educated workers of all races, as well as on the manufacturing and agricultural industries in which they are over-represented.

At the same time, these beleaguered constituencies have lost their traditional voice in social democratic parties and have yet to be adequately represented by parties of the right in Europe and North America. It was not that long ago that industrial workers and farm workers and small farmers were championed by the radical as well as the moderate left.

Centrist social democracy, too, rested on farmer-labour alliances. The New Deal coalition that dominated US politics for half a century after the Great Depression was based on factory workers in the industrial north-east and farmers in the southern and western periphery.

But as the shares of the electorate and workforce accounted for by farmers and industrial workers have declined, parties of the centre left have dissolved their own constituency and elected a new one, consisting of two kinds of service-sector workers: highly educated, highly paid professionals and managers concentrated in a small number of metropolitan areas, and a supporting cast of disproportionately foreign-born service workers and personal servants such as maids, nannies and gardeners. Meanwhile, working-class and rural citizens are becoming aligned, geographically and culturally. The combination of high property costs and low wages in European and American cities has forced most members of the working class into suburbs and exurbs or small towns, with big cities rescued from depopulation only by a constant influx of immigrants to replace the citizens and naturalised immigrants who move out.

The rural workforce is less likely to have a college education than metropolitan residents. In Europe as well as North America, the old dichotomy of rural farmers and urban proletarians has given way to a new “rurban” or “exurban” economy in which less-educated people work in a variety of occupations.

While non-metropolitan areas are becoming more diverse in occupational structure and ethnicity, cultural divisions are hardening between their residents and the upscale inhabitants of downtowns and inner suburbs.

Furthermore, attempts to use the reshoring of industry, green and otherwise, to rebuild something like the old farmer-labour coalition to the left of centre in Europe and America may be doomed by the abrupt tendency of the metropolitan elite to take up new causes such as transgender rights and Black Lives Matter-style “equity”, and immediately seek to impose their newly adopted values on the rest of the population by government policy and private economic coercion. Most of the members of the multiracial working class have moved out of dense urban neighbourhoods, only to find themselves subject in their new homes to ceaseless moralistic hectoring and coercive social engineering efforts by the affluent urbanites who dominate national government.

The greater dependence of society on disproportionately rural and exurban essential service workers gives them leverage that hair stylists and house-cleaners lack. Their ability to control choke points in food production and the distribution of goods of all kinds gives workers and small producers in agriculture and logistical industries leverage that factory workers have lost, because factories can be moved abroad if their workers seek to unionise or improve their union contracts. Instead of shutting down only a single establishment, these producers can threaten to shut down the entire economy.

Many rural and exurban residents may rally around protesters in occupations other than their own, based on a shared resentment of the aggressive overreach of elite urbanites.

The Re-Skilling of America
By Michael Lind

In 2022, 37.6% of adults without a disability had at least a bachelor’s degree. In 1990 only 20% of the older-than-25 population had a bachelor’s degree, and in 1970 the share was 11%. And yet according to the Strada Institute for the Future of Work, 45% of Americans with four-year degrees work, a decade after graduation, in jobs that do not require college diplomas.

Today a worker earning between $40,000 and $60,000 in inflation-adjusted 2022 dollars is as likely to have a bachelor’s degree as a worker in 2006 who earned between $60,000 and $80,000, when there were fewer college graduates as a share of the workforce.

From the employer’s perspective, weeding out job applicants in favor of college graduates on the assumption that someone with a B.A. is at least likely to show up on time and complete assigned tasks may make sense. But wasting four or more years in college at a cost of $100,000 and up is a wildly inefficient way for graduates to prove they are more punctual and harder working than their peers. Worse, using college degrees as a simple sorting mechanism discriminates against the majority of Americans, whether from inner cities or rural areas, whose education ends with high school or some college because of a mix of cultural and economic factors that have no strong relation to either native intellect or the capacity for work discipline.

The Broken Economic Order
By Mariana Mazzucato

Declining support for the Democratic Party among working-class voters reflects a deep disenchantment with an economic system that, under administrations led by presidents of both parties, has concentrated wealth at the very top, enabled the growth of the financial sector at the expense of the rest of the economy, trapped people in cycles of debt, and deprioritized the well-being of millions of Americans.

Decades of economic policies that weakened labor laws, underinvested in education and health care, and bolstered the financial services sector have perpetuated structural inequalities in the United States. Biden arguably did more than most of his recent predecessors to address stagnant wages and a high cost of living, including bringing down inflation from 9.1 percent in June 2022 to 2.4 percent in September 2024 and signing an executive order to ensure a $15 per hour minimum wage for federal government employees and contractors. But like those predecessors, he left unresolved many underlying problems: wealth and income inequality, high rates of personal debt, uneven access to high-quality education and health care, inadequate labor laws, and the financial sector’s expanding share of and influence over the economy.

The issue is not poor economic performance. Average GDP growth under Biden was approximately the same as it was during Trump’s first term, and the United States’ pandemic recovery was the strongest in the G-7. The U.S. economy added almost 15 million jobs between January 2021 and January 2024; in the first three years of Trump’s administration, by contrast, fewer than seven million jobs were added.

But critically, economic growth has not translated to improved circumstances for many Americans. According to the latest census data, 36.8 million people—11 percent of the U.S. population—lived in poverty in 2023. As of June 2023, 43.6 million Americans held an average student loan debt of approximately $38,000 per borrower. Americans’ economic frustrations have been compounded by inflation, which increased during the first two years of Biden’s administration to a peak above nine percent in June 2022. When supply bottlenecks emerged because of the pandemic and Russia’s invasion of Ukraine, companies jacked up prices for food, energy, and other goods, worsening inflation. And perhaps most important, wage growth has stagnated: average weekly earnings rose under Biden but not enough to keep up with inflation.

To build an economy that works for all, public investment is critical. Private investment in domestic production will not happen without government investment, and businesses left to their own devices will not necessarily invest in ways that benefit working people.

American Capitalism Must Reorient Toward the Long Term
By David Teece and Aurelien Portuese

Corporate America is caught in a destructive cycle of short-termism, with executives rewarded for quarterly stock performance at the expense of long-term investments. Nowhere is this more evident than in the technology industry. Consider the case of Nvidia. Despite tremendous results in the sales of its innovative chips, and despite the critical importance for a U.S. company of fiercely competing with Chinese rivals, Nvidia recently announced that it would spend $11 billion in cash buying back stock to appease shareholders. It should have reinvested these profits into developing AI chips and operational fortitude at a time when bolstering innovation and security is critical to outrunning China’s own rapid innovation and reducing the supply chain risks that shuttered globalized operations during COVID and raised prices for businesses and consumers. This retreat from future-planning reflects a larger failure to look beyond the demands of shareholder activists, who are usually short-termers interested only in a quick buck. In doing so, Nvidia risks going down the path of Intel and Boeing, which spent decades of profit on repeated buybacks instead of research and development and quality maintenance and are now competitively stricken.

Another example: Apple underinvested in the artificial intelligence revolution and is now a bit player. To try to catch up, Apple announced a joint venture with OpenAI. At the same time, it announced an unprecedented $110 billion stock buyback. Apple has been forced to depend on an AI startup to secure a position in the most important and rewarding frontier market when it should have been an AI pioneer itself. The $110 billion buyback underlines Apple’s flawed corporate strategy and self-inflicted dependency on competitors for access to the next important markets.

Prioritizing dividends and buybacks when there are rich opportunities to invest for the future is an unfortunate corporate choice that undermines workers and the global competitiveness of American capitalism. Investments that could fuel Nvidia’s and Apple’s growth are being pushed aside in favor of short-term rewards for shareholders and today’s management. The consequences of this mindset are obvious: less innovation, fewer bold bets on transformative technologies, and a gradual loss of America’s technological leadership and competitive advantage in global markets.

The real threat to American prosperity
By Daron Acemoglu

American economic success in the era after the second world war depended on innovation, which in turn relied on strong institutions that encouraged people to invest in new technologies, trusting that their inventiveness would be rewarded. This meant a court system that functioned, so that the fruits of their investments could not be taken away from them by expropriation, corruption or chicanery; a financial system that would enable them to scale up their new technologies; and a competitive environment to ensure that incumbents or rivals couldn’t block their superior offerings. These kinds of institutions matter under all circumstances, but they are especially critical for economies that rely heavily on innovation.

Stability requires that people trust institutions, and institutions become more likely to fail when people think they are failing.

Democracy’s bargain everywhere, and especially in the US, was to provide shared prosperity (economic growth out of which most people benefited), high-quality public services (such as roads, education, healthcare) and voice (so that people could feel they were participating in their own government). From around 1980 onwards, all three parts of this bargain started to fall away.

Economic growth in the US was rapid for most of the post-1980 era, but about half of the country didn’t benefit much from this. In a pattern unparalleled in the industrialised world, Americans with less than a college degree experienced a real (inflation-adjusted) decline in their wages between 1980 and 2013, while those with postgraduate degrees experienced robust growth.

It wasn’t only income. Postgraduates and those in specialised “knowledge” occupations increased their social standing relative to blue-collar workers and traditional office employees. Many regions of the country were gripped by long-lasting recessions as cheap imports from China and new technologies destroyed jobs, while major coastal, globally hyperconnected metropolitan centres continued to flourish. Another dimension of inequality was similarly jarring: a rapidly multiplying number of multibillionaires, not just flaunting their wealth but exercising ever greater influence over politics and people’s lives.

Many Americans felt that they no longer had much of a political voice. In surveys, more than 80 per cent started saying that politicians did not care about what people like them thought. They also reported incredibly low levels of trust in all branches of government, in courts, in the police and in the bureaucracy.

Joe Biden was elected as president in November 2020 in part to restore stability to US institutions and strengthen democracy. He boasted that in its first 100 days, his administration “acted to restore the people’s faith in our democracy to deliver”. But polarisation took its toll on Biden’s presidency.

Democratic party activists interpreted the 2020 election results as a mandate to adopt a radical agenda of social change throughout US society, some of it starting in federal or local governments and some of it emanating from universities and non-governmental organisations, though empowered by the knowledge that the party in government favoured this agenda. Biden was arguably too weak or too beholden to the various parts of his coalition to chart a different course. For many, much of this felt like top-down social engineering, and was one of the factors that brought Trump back to power in 2025.

Biden, who four years earlier had made the defence of democracy a main agenda item, pre-emptively pardoned his family and a number of politicians and public servants, including former Republican Congresswoman Liz Cheney and the former medical adviser to the president, Anthony Fauci. The optics were clear and ugly: by this point, Biden and his camp had so little trust in US institutions that they thought only such pre-emptive pardons could stop Trump’s retribution (and, making the reality worse than the optics, only those enemies of Trump who were close to Biden counted).

Symbols matter, especially when it comes to institutions. Once it becomes accepted that institutions are not functioning and cannot be trusted, their slide intensifies and people are further discouraged from defending them. We could see this dynamic already in the late 2000s, interwoven with polarisation. Trust in institutions took a severe beating after the financial crisis of 2007-09, precisely because the conceit of a well-regulated, expertly run economy came crashing down. Understandably, many Americans reacted negatively when the government rushed to rescue banks and bankers while doing little to help homeowners in bankruptcy or workers who had lost their jobs. The inequalities that had formed became much more visible, partly because the lavish lifestyles of the bankers rescued by the government became a symbol for the gulf that had opened between regular working people and the very rich.

Similarly, the critical state of US institutions became much clearer after Biden’s cynical pardons, sending a signal to millions that his administration’s defence of democracy was a charade.

The End of Democratic Capitalism?
By Daron Acemoglu

From the early 1940s to the 1970s, the fruits of economic growth were broadly shared. Real wages grew rapidly—on average, by more than two percent every year for both high-skilled and low-skilled workers. And from the end of World War II to 1980, overall inequality fell substantially. Since 1980, however, real wages have continued to rise among workers with postgraduate degrees and specialized skills but have stagnated or even declined for workers, especially men, who have only a high school degree or no degree at all. In the meantime, the share of total income going to the richest one percent of households has nearly doubled—from ten percent in 1980 to 19 percent today. To put it simply, the United States abandoned shared prosperity in favor of a model in which only a minority of people benefit from economic growth while the rest are left in the dust.

The situation is less dire in many other Western countries, thanks to higher minimum wages, collective bargaining, and social norms against inequality in the workplace. All the same, most industrialized countries have seen the real earnings of low-education workers stagnate or decline while the rich have gotten richer.

If democracies are truly meritocratic, then people who succeed deserve their success, while those who fail deserve their failure. Of course, no society is truly meritocratic. Privilege (or the lack thereof) shapes the lives of most people. As the Harvard philosopher Michael Sandel has emphasized, the illusion of meritocracy has had pernicious effects: many Americans who have seen their real incomes decline or stagnate are being told, implicitly or explicitly, that their misfortune is their own fault. It is no surprise, then, that many of those left behind now reject the democratic institutions emblematic of the kind of meritocracy that blames struggling people for their own plight.

For example, U.S. politicians touted both the North American Free Trade Agreement and China’s integration into the World Trade Organization as beneficial not just to U.S. companies but ultimately to all Americans. The same figures also kept reassuring the public that it would soon reap the rewards, thus inflating aspirations and paralyzing efforts to build better institutions to deal with the disruptive effects of new technologies and globalization. Worse, many of these policies were presented as technocratic, scientifically supported truths. This misrepresentation facilitated the acceptance of these policies in the short run. It also further contributed to the decline of trust in state institutions and experts in the longer run.

Although it is clear that this decline in trust has led people in democracies to lose faith in their institutions, it is less clear why the disenchanted have turned toward right-wing populism and authoritarianism rather than to left-wing alternatives.

Regimes dubbed right-wing populist, authoritarian, majoritarian, or religiously conservative, including those in India and Turkey, are actually first and foremost nationalist in their orientation. Leaders exploit patriotic feelings to boost their popularity—and their control over the population. Such is also the case in China, where school curricula and media propaganda have stoked nationalist sentiment.

Globalization appears to play a major role in the resurgence of nationalism. It has created new inequalities, by allowing companies to avoid taxes and by failing to contribute to job creation domestically, and has deepened tensions, because it challenges social norms via the spread of ideas through the Internet, movies, television, and music.

Throughout history, control of technology has determined how the gains from economic growth are shared. When landlords in medieval Europe controlled the most important technology of the era, such as water and wind mills, they ensured that improvements in productivity enriched them, not their workers. In the early stages of the Industrial Revolution, when entrepreneurs rapidly introduced automated production processes and corralled workers, including women and children, into factories, they benefited, while wages stagnated and may even have fallen.

Fortunately, it is possible to change who controls technology and thus alter its application, especially in terms of whether it will disempower workers and automate work or increase worker capabilities and productivity. The reason Western countries have become much more unequal is that they have allowed a small group of entrepreneurs and companies to set the direction of technological change according to their own interests—and against those of most workers.

Modern market economies need to be fundamentally reformed; otherwise, companies will continue to overinvest in the kind of automation that replaces workers rather than enhances their productivity. Companies are also likely to double down on massive data collection and surveillance, even though these activities are anathema in a democracy.

It is up to governments to regulate and redirect technological change. If companies continue to automate without investing in training and technologies that could help workers, inequality will continue to worsen, and those at the bottom will feel even more disposable. To prevent such an outcome, policymakers must determine which broad classes of technologies can be helpful to workers and deserve public support. They also need to regulate the tech industry, including its powers to collect data, advertise digitally, and create large language models, such as the artificial intelligence chatbot ChatGPT. And the government must give workers a voice in the process of regulating tech companies. That does not mean the government should allow labor unions to block technological change; rather, it should ensure that worker representatives can negotiate how technology is being used in workplaces.

But such regulation is very hard to devise because policies over the last four decades have destroyed trust in state institutions. It is even harder when the labor movement has been gutted and the pillars of democratic citizenship have weakened.

Democratic capitalism is indeed in crisis. Any solution must begin with a focus on restoring public trust in democracy. People in democracies are not, in fact, helpless: there are ways to create a fairer type of economic growth, control corruption, and curb the excessive power of large companies, as the economist Simon Johnson and I have argued. This will not only help reduce inequality and lay the foundations of shared prosperity; it will also demonstrate that democratic institutions work—ensuring that this crisis of democratic capitalism does not spell democracy’s end.

Reviving the American Dream
By David Leonhardt

For all the cynicism about politics today, it is worth remembering how often grass-roots political movements in the U.S. have managed to succeed. In the 1920s and 1930s, the country had a highly unequal economy and a Supreme Court that threw out most policies to reduce inequality. But activists — like A. Philip Randolph, a preacher’s son from Jacksonville, Fla., who took on a powerful railroad company — didn’t respond by giving up on the system as hopelessly rigged.

They instead used the tools of democracy to create mass prosperity. They spent decades building a labor movement that, despite many short-term defeats, ultimately changed public opinion, won elections and remade federal policy to put workers and corporations on a more equal footing. The rise of the labor movement from the 1930s through the 1950s led to incomes rising even more rapidly for the poor and middle class than for the rich, and to the white-Black wage gap shrinking.

One big lesson I took from my research was the unparalleled role of labor unions in combating inequality (a role that more Americans seem to have recognized recently).

There are plenty of other examples of grass-roots movements remaking American life. The civil-rights and women’s movements of the 1960s also overcame long odds, as did the disability-rights movement of the 1970s and the marriage-equality movement of the 2000s.

Other examples come from the political right. In the 1950s and 1960s, a group of conservatives, including Milton Friedman and Robert Bork, began trying to sell the country on the virtues of a low-tax, light-regulation economy. For years, they struggled to do so and were frustrated by their failures. Friedman kept a list of newspapers and magazines that did not even review his first major book.

But the conservatives kept trying — and the oil crisis that began 50 years ago last week eventually helped them succeed. A politician who embraced their ideas, Ronald Reagan, won the presidency and moved the U.S. closer to the laissez-faire ideal than almost any other country.

The conservatives who sold this vision promised it would lead to a new prosperity for all. They were wrong about that, of course. Since 1980, the U.S. has become a grim outlier on many indicators of human well-being. But the conservatives were right that overhauling the country’s economic policy was possible.

This history does not suggest that the political system is hopelessly broken. It instead suggests that the U.S. doesn’t have a broadly prosperous economy largely because the country has no mass movement organized around the goal of lifting living standards for the middle class and the poor. If such a movement existed, it might well succeed. It has before.

A New Centrism Is Rising in Washington
By David Leonhardt

The very notion of centrism is anathema to many progressives and conservatives, conjuring a mushy moderation. But the new centrism is not always so moderate. Forcing the sale of a popular social app is not exactly timid, nor is confronting China and Russia. The bills to rebuild American infrastructure and strengthen the domestic semiconductor industry are ambitious economic policies.

A defining quality of the new centrism is how much it differs from the centrism that guided Washington in the roughly quarter-century after the end of the Cold War, starting in the 1990s. That centrism — alternately called the Washington Consensus or neoliberalism — was based on the idea that market economics had triumphed. By lowering trade barriers and ending the era of big government, the United States would both create prosperity for its own people and shape the world in its image, spreading democracy to China, Russia and elsewhere.

That hasn’t worked out. In the U.S., incomes and wealth have grown slowly, except for the affluent, while life expectancy is lower today than in any other high-income country. Although China, along with other once-poor countries, has become richer, it is less free — and increasingly assertive.

The new centrism is a response to these developments. It is a recognition that neoliberalism failed to deliver. The notion that the old approach would bring prosperity, as Jake Sullivan, Biden’s national security adviser, has said, “was a promise made but not kept.” In its place has risen a new worldview. Call it neopopulism.

Both Democrats and Republicans have grown skeptical of free trade; on Tuesday, Biden announced increased tariffs on several Chinese-made goods, in response to Beijing’s subsidies. Democrats and a slice of Republicans have also come to support industrial policy, in which the government tries to address the market’s shortcomings. The infrastructure and semiconductor laws are examples. These policies feel more consistent with the presidencies of Dwight Eisenhower or Franklin Roosevelt than those of Ronald Reagan or Bill Clinton.

The term neopopulism is apt partly because polls show these new policies to be more popular than the planks of the Washington Consensus ever were. Decades ago, politicians of both parties pushed for liberalizing global trade despite public skepticism. In retrospect, many politicians and even some economists believe that Americans were right to be skeptical.

As was the case during the 20th century, another important factor is an international rivalry. Then, it was the Cold War. Now, it is the battle against an emerging autocratic alliance that is led by China and includes Russia, North Korea, Iran and groups like Hamas and the Houthis.

In part, this fusing of right and left is a sign that politicians are reacting rationally to voters’ views. Many political elites — including campaign donors, think-tank experts and national journalists — have long misread public opinion. The center of it does not revolve around the socially liberal, fiscally conservative views that many elites hold. It tends to be the opposite.

Americans lean left on economic policy. Polls show that they support restrictions on trade, higher taxes on the wealthy and a strong safety net. Most Americans are not socialists, but they do favor policies to hold down the cost of living and create good-paying jobs. These views help explain why ballot initiatives to raise the minimum wage and expand Medicaid have passed even in red states. They also explain why some parts of Biden’s agenda that Republicans uniformly opposed, such as a law reducing medical costs, are extremely popular. “This is where the center of gravity in the country is,” Steve Ricchetti, a top White House official, told me.

The story is different on social and cultural issues. Americans lean right on many of those issues, polls show (albeit not as far right as the Republican Party has moved on abortion).

The clearest example in the Biden era is immigration. A core tenet of neoliberalism, once supported by both parties, is high immigration. Along with the freer movement of goods and capital, neoliberalism calls for the freer movement of people.

Most voters, especially working-class voters, feel differently. The soaring level of immigration during Biden’s presidency, much of it illegal, has become a political liability.

For decades, Washington pursued a set of policies that many voters disliked and that did not come close to delivering their promised results. Many citizens have understandably become frustrated. That frustration has led to the stirrings of a neopopulism that seeks to reinvigorate the American economy and compete with the country’s global rivals.

Geopolitics in the C-Suite
By Jami Miscik, Peter Orszag and Theodore Bunzel

The centrality of economic competition to today’s foreign policy problems represents a qualitative break from the past. During the Cold War, for example, the United States and the Soviet Union hardly interacted economically: trade between them peaked at a paltry $4.5 billion in 1979; in recent years, the United States and China have generally traded that much every week or two, adjusting for inflation. In the post–Cold War era, U.S. foreign policy was focused on opening markets and reducing international economic barriers rather than erecting them. Era-defining crises such as the 9/11 attacks did little to change the relationship between U.S. policymakers and American corporations; if anything, the “war on terror” further solidified the idea that foreign policy was primarily concerned with security and military issues, not economics.

But in the background, global economic integration was transforming the playing field. In 1980, trade accounted for just 37 percent of global GDP. Today, that figure is 74 percent, and economies have become intertwined to a degree never seen in the twentieth century. Globalization is not new, of course; it has been a centuries-long process. What is new, however, is the emergence of great-power rivalry in a highly interconnected world. Military power still matters, but economic and technological competition have become the main battlefield of global politics.

Greater economic integration has also created a complex web of links between geopolitical rivals that policymakers now seek to leverage for strategic ends. This is especially true when it comes to financial and technological networks, where Washington holds a privileged position. As noted by the scholars Henry Farrell and Abraham Newman in their recent book Underground Empire, the United States sits at the center of a vast informational plumbing system, built almost haphazardly over decades, that allows the global economy to function. The ubiquity of the dollar in global transactions, U.S. control of critical Internet infrastructure, and the dominance of American companies when it comes to the intellectual property rights behind the most important technology have allowed Washington to coerce or target geopolitical rivals, often through sanctions.

But as great-power tensions have increased, so has the number of sectors caught in the fray of what Farrell and Newman call “weaponized interdependence.” Consider, for example, the way that G-7 countries have taken advantage of Russian dependence on shipping insurers based in the West, an industry that most foreign policymakers had probably never thought about before Russia’s 2022 invasion of Ukraine. To try to cap the price of Russian oil exports, the G-7 prevented these companies from insuring Russian crude oil cargoes unless they had been sold at a maximum of $60 per barrel.

Western powers are not the only ones playing this game. In 2010, after a Chinese fishing trawler and Japanese Coast Guard patrol boats collided in disputed waters, setting off a diplomatic row between Beijing and Tokyo, China banned exports to Japan of the rare-earth minerals that are critical components of batteries and electronics, thus raising costs and creating shortages for Japanese manufacturers of everything from hybrid cars to wind turbines.

As governments tinker with complex supply chains and technology ecosystems built over decades, the choices and conduct of thousands of corporate actors will make it harder to achieve policy goals. And given an inherently limited toolkit and the myriad nuances of each industry, the U.S. government can’t possibly think of every conceivable workaround or contingency for a specific sanction or export control. Washington will have to rely on companies to adhere to the spirit of policies, rather than just the letter. Even if most companies comply with new rules at first, over time, some will find ways to get around restrictions and overcome hurdles; regulators and lawmakers will need to be vigilant. And U.S. rivals will hardly sit idly by. After the West severed nearly every economic interaction with Russia in 2022, Moscow soon found alternative sources of supply from China: Russian imports from Beijing have surged 64 percent since 2021.

Governments can convey their intent to reduce foreign economic dependencies, but their means are limited. Subsidies and other financial breaks are too small to fully rewire embedded supply chains that were built over decades. And more extreme policies such as import bans risk shortages and price spikes, not to mention full-blown trade wars that could devastate entire sectors.

Tariffs without industrial policy won’t work
By Rana Foroohar

As trade expert and former US China Economic and Security Review Commission member Michael Wessel puts it: “Major public firms look to investment metrics that are often five years or longer. No one knows how long tariffs may last either during this administration, or beyond.

“Without the industrial policies in place, markets may not have the confidence” to pour money back in to the US, particularly in areas like manufacturing or energy, which have even longer timelines for return on investment.

Even if the Trump administration were being clear about exactly where it wants to build capacity, it would need to go much deeper on tariff design to guard against things like “tariff inversion”, when duties on imported component parts end up being higher than those on finished goods, hurting domestic manufacturers.

Then there are the inventory issues. US companies tend to keep very little inventory on hand because of just-in-time production models. That matters a great deal when there are sudden retaliatory limits on rare earth minerals from China, or export bans by places like the Democratic Republic of Congo, one of the few countries where the critical mineral cobalt can be sourced. As one risk analyst told me, these types of disruptions can collide to shut down production in areas such as electric vehicles, medical devices and aerospace materials.

Guess who’s in store for pain — and gain — under Trump’s economic plans
By Lael Brainard

One thing is certain: Higher broad-based tariffs mean more pain for America’s working families. The tariffs that have been proposed and imposed so far mean higher costs for groceries and the purchase of a car; they mean higher costs for housing materials, putting homeownership further out of reach for many Americans. These are not “cheap baubles,” as some administration officials have suggested, but basic necessities. American consumers are already stressed by high egg prices. Their grocery budgets would be further squeezed by the proposed tariffs on fresh fruits and vegetables from Mexico.

Some officials will tell you that Americans will get so much from tax cuts that the tariff-driven increases in prices won’t matter. But that is true only for wealthy households: The average annual tax cut from extending Trump’s cuts is more than $43,000 for the top 1 percent of earners (those making more than $646,000 a year). These cuts alone will cost more than $1 trillion over the 10-year budget window.

To pay for those top-earner tax cuts, the House prescribes about $1 trillion in cuts to Medicaid and food aid that will result in families paying more for groceries and health care. Many will lose health coverage altogether. Low- and moderate-income families would actually lose between $430 and $1,130 on average each year when the total effects of the cuts in health care and food assistance and the tax cuts in the House package are considered, while middle-income families would see a net benefit of only about $350 per year, a tiny fraction of what Americans at the top of the income ladder would gain.

Some families would be more deeply affected. When a low-income family’s food assistance through SNAP is taken away, it might not be able to pay for both groceries and rent. And someone who can’t afford cancer treatment or diabetes medications because their Medicaid coverage has been terminated could suffer catastrophic consequences.

And none of this includes the income losses resulting from the tariffs. Once you factor in the hit that families will take from the proposed and imposed tariffs on our top three trading partners, the combined effect of the tax package and tariff plans means that families at the middle of the income distribution would lose about $550 per year. Those with incomes in the lower 40 percent of the distribution would lose between $1,030 and $1,670. The top 1 percent of earners, on the other hand, would enjoy a net gain of $35,630. And the income losses for low- and middle-income families would grow larger once we factor in the tariffs on autos and the coming “reciprocal tariffs” on imports from other countries that Trump will announce this week.

The next time you hear that the gains from the combination of tariffs and the budget plans are worth the pain, keep in mind: Working families will bear the pain, while those who are already doing very well will get the gains.

The Age of Miraculous Growth Is Over. Trump’s Tariffs Were Just the Last Straw.
By Dani Rodrik

For more than seven decades, many countries have followed the same path out of poverty: make things, and sell them abroad. This strategy, known as “export-oriented industrialization,” was rooted in domestic reforms but turbocharged by the opening of the world economy under American leadership after the Second World War.

The model for poor countries hoping to grow rich was pioneered by the four Asian Tigers: Taiwan, South Korea, Hong Kong and Singapore. In the 1950s, they were about as poor as African nations and heavily dependent on agricultural products and raw minerals. Trade allowed them to specialize in increasingly sophisticated manufactured goods for sale on world markets — toys, clothes, steel, cars, electronics and now semiconductors — while providing them with access to advanced technologies and machinery.

China soon followed suit, perfecting that strategy to become the world’s manufacturing powerhouse, lifting 800 million people out of extreme poverty and emerging as America’s chief rival. (China’s strategy may have worked too well: By the time Joe Biden became president, “beating” China had become a bipartisan imperative.)

After labor costs in China rose, some foreign investment moved to Vietnam, which had also started down this path in the 1980s and could well be the last significant success of the export-oriented model.

Given the almost miraculous transformation of these countries, it is no surprise that other poor nations around the world seek to emulate them. Export-oriented industrialization has become a buzzword for growth experts and multilateral organizations, and an imperative for policymakers in poor nations’ capitals.

But since the 1990s, changes in the technology and organization of manufacturing production have made the strategy far less effective. Automation, robots and 3-D printing have made it easy to substitute machines for workers, diminishing poor nations’ most significant competitive advantage: their pool of abundant labor.

There’s a Reason the World Is a Mess, and It’s Not Trump
By Aaron Benanav

In the past, G20 economies regularly grew 2 to 3 percent per year, doubling incomes every 25 to 35 years. Today, many growth rates are 0.5 to 1 percent, meaning incomes now take 70 to 100 years to double — too slow for people to feel progress in their lifetimes. The significance of that change can’t be overstated. Stagnation does not have to be absolute to collapse expectations: When people no longer assume their or their children’s living standards will improve, trust in institutions erodes and discontent rises.
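The doubling-time arithmetic behind those figures is just compound growth. As a rough check, here is a minimal sketch; nothing in it comes from the article itself, only the standard formula (doubling time = ln 2 / ln(1 + r)), which the familiar "rule of 70" approximates:

```python
import math

# Doubling time for a compound annual growth rate r: ln(2) / ln(1 + r).
# The "rule of 70" shortcut (70 divided by the growth rate in percent)
# gives roughly the same answers.
for rate in (0.03, 0.02, 0.01, 0.005):
    years = math.log(2) / math.log(1 + rate)
    print(f"{rate:.1%} growth -> incomes double in about {years:.0f} years")

# 3.0% growth -> incomes double in about 23 years
# 2.0% growth -> incomes double in about 35 years
# 1.0% growth -> incomes double in about 70 years
# 0.5% growth -> incomes double in about 139 years
```

At the bottom of that range the exact figure is closer to 140 years, even slower than the rounded numbers in the text, which only strengthens the point about collapsed expectations.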

So why has growth slowed so starkly?

One reason is the global shift from manufacturing to services. This has stalled the primary engine of economic expansion: productivity growth. Productivity — the output per hour worked — can rise quickly in manufacturing. A car factory that installs robotic assembly lines, for example, can double production without hiring more workers, perhaps even firing some. But in services, efficiency is much harder to improve. A restaurant that gets busier usually needs more servers. A hospital treating more patients will require more doctors and nurses. In service-based economies, productivity is always slower to rise.
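As a toy illustration of that asymmetry (all figures below are invented purely for illustration), productivity is simply output divided by hours worked:

```python
def productivity(output_units: float, hours_worked: float) -> float:
    """Output per hour worked."""
    return output_units / hours_worked

# A factory automates: output doubles while total hours stay flat,
# so productivity doubles.
factory_before = productivity(1_000, 2_000)   # 0.5 units per hour
factory_after  = productivity(2_000, 2_000)   # 1.0 units per hour

# A restaurant gets busier: serving twice the diners takes roughly twice
# the staff hours, so productivity barely moves.
restaurant_before = productivity(200, 400)    # 0.5 meals per hour
restaurant_after  = productivity(400, 800)    # still 0.5 meals per hour

print(factory_before, factory_after, restaurant_before, restaurant_after)
```

The same logic explains why, as economies tilt toward services, the aggregate numbers drift down even when individual firms are run well.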

This seismic shift, in the making for decades, has a name: deindustrialization. In America and Europe, we know what that looks like: lost manufacturing jobs, amid declining demand for industrial goods. But deindustrialization is not limited to wealthy economies. The move from manufacturing to services is happening across the G20, dragging down growth rates nearly everywhere. Today about 50 percent of the world’s work force is employed in the service sector.

There’s another reason for global stagnation: slowing population growth. Birthrates surged after World War II, creating strong demand for housing and infrastructure construction and spurring the postwar boom. Demographers once assumed birthrates would stabilize at replacement level, around two children per family. Instead, fertility rates have tended to fall below this threshold. The trend, historically the result of families having fewer children but more recently of fewer people starting families, now affects Malaysia, Brazil, Turkey and even India.

This is a big problem for the economy. Shrinking workforces mean smaller future markets, discouraging businesses from expanding — especially in service-based economies, where, along with limited productivity gains, costs tend to rise. Investment falters. At the same time, a falling share of working-age people means fewer taxpayers supporting more retirees, driving up pension and health care costs and pressuring governments to raise taxes, increase debt or cut benefits.

In this stagnant setting, businesses have shifted strategies. Instead of reinvesting profits into expansion, hiring and innovation, many companies now focus on stock buybacks and dividends, prioritizing financial payouts that boost share prices and managerial compensation. The result is a vicious cycle of rising inequality, damped demand and low growth. This is happening the world over.

In the past, the primary rationale for policies that enriched wealthy households was to stimulate growth from the top down, but this strategy has evidently failed. Instead, governments could place much higher taxes on the rich and redistribute income to the rest of society. That would be an uphill battle in the United States and elsewhere, admittedly, but it would bring big benefits, improving consumer demand and strengthening markets both domestically and internationally.

The goal should be not just to raise income levels, which studies show are increasingly disconnected from happiness, but also to build more stable and equitable societies in a slower-growth world. That requires investing to improve people’s lives: repairing ecosystems, rebuilding infrastructure and expanding housing. Doing so could also help create conditions for poorer nations to pursue export-led development on fairer and more predictable terms.

How software is eating the world
By Derek Robertson

The researchers wanted to solve a puzzle: why workers’ share of total income in the economy has been steadily declining since the 1980s. While the economy has grown to once-unthinkable proportions since then, workers’ earning power has not. That gap has had profound political consequences, inspiring critiques from Bernie Sanders on the left to JD Vance on the right.

Shin and Aum’s paper argues that software is directly responsible. Their findings are stark, especially for white-collar workers: In economic terms, old-school machines actually improved productivity for workers. Software, however, flat-out replaces them.

They discovered this by studying data from South Korea, where firms track their investment in software independently of their investment in other forms of capital. They found that companies that invested heavily in software saw big productivity boosts when software became cheaper, and that those firms “tend to have lower-than-average labor shares.”

“What’s really happening is that firms, instead of hiring five secretaries, now hire only two secretaries who are knowledgeable and who can use software very well,” Shin said. “For white-collar workers over the last 25 years or so there’s been a subtle shift, demanding more investment in software skills on the side of workers while the gains in productivity don’t go back to them.”

AI will only accelerate this phenomenon, he says. While the paper’s data predates the current boom in generative AI, the authors point to another recent NBER paper showing that generative AI serves a similar function, acting as a “substitute” for labor in the economy.

“It will be very high-skilled workers, a much smaller segment of workers who can really take full advantage of generative AI,” Shin said.

AI is automating our jobs – but values need to change if we are to be liberated by it
By Robert Muggah and Bruno Giussani

In 2023, Goldman Sachs projected that “roughly two-thirds of current jobs are exposed to some degree of AI automation” and that up to 300 million jobs worldwide could be displaced or significantly altered by AI.

A more detailed McKinsey analysis estimated that “Gen AI and other technologies have the potential to automate work activities that absorb up to 70% of employees’ time today”. Brookings found that “more than 30% of all workers could see at least 50% of their occupation’s tasks disrupted by generative AI”. Although the methodologies and estimates differ, all of these studies point to a common outcome: AI will profoundly upset the world of work.

Goldman Sachs estimates that 46% of administrative work and 44% of legal tasks could be automated within the next decade. In finance and legal sectors, tasks such as contract analysis, fraud detection, and financial advising are increasingly handled by AI systems that can process data faster and more accurately than humans. Financial institutions are rapidly deploying AI to reduce costs and increase efficiency, with many entry-level roles set to disappear. Global banks could cut as many as 200,000 jobs in the next three to five years on account of AI.

Ironically, coding and software engineering jobs are among the most vulnerable to the spread of AI. While AI is expected to increase productivity and streamline routine tasks, with many programmers and non-programmers likely to benefit, some coders admit that they are becoming overly reliant on AI suggestions, which undermines their problem-solving skills.

Anthropic, one of the leading developers of generative AI systems, recently launched an Economic Index based on millions of anonymised uses of its Claude chatbot. It reveals massive adoption of AI in software engineering: “37.2% of queries sent to Claude were in this category, covering tasks like software modification, code debugging, and network troubleshooting”.

Arguably the most dramatic impact of AI in the coming years will be in the manufacturing sector. Recent videos from China offer a glimpse into a future of factories that run 24/7 and are almost entirely automated, with only a handful of workers left in supervisory roles. Most tasks are performed by AI-powered robots and technologies designed to handle production and, increasingly, support functions.

Unlike humans, robots do not need light to operate in these “dark factories”. Capgemini describes them as places “where raw materials enter, and finished products leave, with little or no human intervention”. Re-read that sentence. The implications are profound and dizzying: efficiency gains (capital) that come at the cost of human livelihoods (labor), and a rapid downward spiral for the latter if no safeguards are put in place.

A 2023 study found that highly educated workers in professional and technical roles are most vulnerable to displacement. Knowledge-based industries such as finance, legal services, and customer support are already shedding entry-level jobs as AI automates routine tasks.

Technology companies have begun shrinking their workforces, using the cuts as a signal to both government and business. Over 95,000 workers at tech companies lost their jobs in 2024. Despite its AI edge, America’s service-heavy economy leaves it highly exposed to automation’s downsides.

… a recent poll of 1,000 executives found that 58% of businesses are adopting AI due to competitive pressure and 70% say that advances in technology are occurring faster than their workforce can incorporate them.

Another new survey suggests that over 40% of global employers planned to reduce their workforce as AI reshapes the labour market.

The Rise of the Robot Reporter
By Jaclyn Peiser

Marc Zionts, the chief executive of Automated Insights, said that machines were a long way from being able to replace flesh-and-blood reporters and editors. He added that his daughter was a journalist in South Dakota — and although he had not advised her to leave her job, he had told her to get acquainted with the latest technology.

“If you are a non-learning, non-adaptive person — I don’t care what business you’re in — you will have a challenging career,” Mr. Zionts said.

For Patch, a nationwide news organization devoted to local news, A.I. provides an assist to its 110 staff reporters and numerous freelancers who cover about 800 communities, especially in their coverage of the weather. In a given week, more than 3,000 posts on Patch — 5 to 10 percent of its output — are machine-generated, said the company’s chief executive, Warren St. John.

In addition to giving reporters more time to pursue their interests, machine journalism comes with an added benefit for editors.

“One thing I’ve noticed,” Mr. St. John said, “is that our A.I.-written articles have zero typos.”

Why some onions were too sexy for Facebook
By BBC News

The Seed Company by EW Gaze, in St John’s, Newfoundland, had wanted to post a seemingly innocent advert for Walla Walla onion seeds on Facebook.

But to their surprise, it was rejected for being “overtly sexual”.

In a statement on Wednesday, the social media company apologised for the error made by its automated technology.

The ad flagged by Facebook showed Walla Walla onions, known for their size and sweet flavour, piled in a wicker basket with some sliced onion on the side.

It took store manager Jackson McLean a moment to realise what the issue was with the posting, he said.

Then he figured out that “something about the round shapes” could be suggestive of breasts or buttocks.

He knew his customers would find the ad rejection funny, and posted the photo, along with the automated Facebook message warning “listings may not position products or services in a sexually suggestive manner”, to the company page.

Mr McLean said some clients have been posting images of potentially suggestive carrots and pumpkins in reply.

An AI avatar tried to argue a case before a New York court. The judges weren’t having it
By Larry Neumeister

In June 2023, two attorneys and a law firm were each fined $5,000 by a federal judge in New York after they used an AI tool to do legal research, and as a result wound up citing fictitious legal cases made up by the chatbot. The firm involved said it had made a “good faith mistake” in failing to understand that artificial intelligence might make things up.

Later that year, more fictitious court rulings invented by AI were cited in legal papers filed by lawyers for Michael Cohen, a former personal lawyer for President Donald Trump. Cohen took the blame, saying he didn’t realize that the Google tool he was using for legal research was also capable of so-called AI hallucinations.

Commentary: AI ‘hallucinations’ are a growing problem for the legal profession
By Michael Hiltzik

Other cases underscore the perils of placing one’s trust in AI.

For example, last year Keith Ellison, the attorney general of Minnesota, hired Jeff Hancock, a communications professor at Stanford, to provide an expert opinion on the danger of AI-faked material in politics. Ellison was defending a state law that made the distribution of such material in political campaigns a crime; the law was challenged in a lawsuit as an infringement of free speech.

Hancock, a well-respected expert in the social harms of AI-generated deepfakes — photos, videos and recordings that seem to be the real thing but are convincingly fabricated — submitted a declaration that Ellison duly filed in court.

But Hancock’s declaration included three hallucinated references apparently generated by ChatGPT, the AI bot he had consulted while writing it. One attributed to bogus authors an article he himself had written, but he didn’t catch the mistake until it was pointed out by the plaintiffs.

Laura M. Provinzino, the federal judge in the case, was struck by what she called “the irony” of the episode: “Professor Hancock, a credentialed expert on the dangers of AI and misinformation, has fallen victim to the siren call of relying too heavily on AI — in a case that revolves around the dangers of AI, no less.”

That provoked her to anger. Hancock’s use of fake citations, she wrote, “shatters his credibility with this Court.” Noting that he had attested to the veracity of his declaration under penalty of perjury, she threw out his entire expert declaration and refused to allow Ellison to file a corrected version.

When A.I. Chatbots Hallucinate
By Karen Weise and Cade Metz

When did The New York Times first report on “artificial intelligence”?

According to ChatGPT, it was July 10, 1956, in an article titled “Machines Will Be Capable of Learning, Solving Problems, Scientists Predict” about a seminal conference at Dartmouth College.

The 1956 conference was real. The article was not. ChatGPT simply made it up. ChatGPT doesn’t just get things wrong at times; it can fabricate information. Names and dates. Medical explanations. The plots of books. Internet addresses. Even historical events that never happened.

Fabrications and definitive statements on uncertain history like these are common. Figuring out why chatbots make things up and how to solve the problem has become one of the most pressing issues facing researchers as the tech industry races toward the development of new A.I. systems.

Chatbots like ChatGPT are used by hundreds of millions of people for an increasingly wide array of tasks, including email services, online tutors and search engines. And they could change the way people interact with information. But there is no way of ensuring that these systems produce information that is accurate.

The technology, called generative A.I., relies on a complex algorithm that analyzes the way humans put words together on the internet. It does not decide what is true and what is not. That uncertainty has raised concerns about the reliability of this new kind of artificial intelligence and calls into question how useful it can be until the issue is solved or controlled.

The tech industry often refers to the inaccuracies as “hallucinations.” But to some researchers, “hallucinations” is too much of a euphemism. Even researchers within tech companies worry that people will rely too heavily on these systems for medical and legal advice and other information they use to make daily decisions.

“If you don’t know an answer to a question already, I would not give the question to one of these systems,” said Subbarao Kambhampati, a professor and researcher of artificial intelligence at Arizona State University.

The new A.I. systems are “built to be persuasive, not truthful,” an internal Microsoft document said. “This means that outputs can look very realistic but include statements that aren’t true.”

The chatbots are driven by a technology called a large language model, or L.L.M., which learns its skills by analyzing massive amounts of digital text culled from the internet.

By pinpointing patterns in that data, an L.L.M. learns to do one thing in particular: guess the next word in a sequence of words. It acts like a powerful version of an autocomplete tool. Given the sequence “The New York Times is a ____,” it might guess “newspaper.”
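A minimal sketch of that autocomplete-style guessing is below; the candidate words and probabilities are invented stand-ins for what a real model computes from billions of learned parameters:

```python
# Toy next-word predictor: for a given context, rank candidate continuations
# by an (invented) probability and return the most likely one.
toy_model = {
    "The New York Times is a": {
        "newspaper": 0.62,
        "company": 0.18,
        "website": 0.11,
        "sandwich": 0.0001,  # implausible continuations keep a tiny, nonzero probability
    },
}

def guess_next_word(context: str) -> str:
    candidates = toy_model[context]
    return max(candidates, key=candidates.get)

print(guess_next_word("The New York Times is a"))  # -> newspaper
```

Nothing in this procedure consults a store of facts; the model only ranks how plausible each continuation looks, which is why fluent output can still be false.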

Because the internet is filled with untruthful information, the technology learns to repeat the same untruths. And sometimes the chatbots make things up. They produce new text, combining billions of patterns in unexpected ways. This means even if they learned solely from text that is accurate, they may still generate something that is not.

Because these systems learn from more data than humans could ever analyze, even A.I. experts cannot understand why they generate a particular sequence of text at a given moment. And if you ask the same question twice, they can generate different text.

But becoming more accurate may also have a downside, according to a recent research paper from OpenAI. If chatbots become more reliable, users may become too trusting.

“Counterintuitively, hallucinations can become more dangerous as models become more truthful, as users build trust in the model when it provides truthful information in areas where they have some familiarity,” the paper said.

Cats on the moon? Google’s AI tool is producing misleading responses that have experts worried
By Matt O’Brien and Ali Swenson

“Yes, astronauts have met cats on the moon, played with them, and provided care,” said Google’s newly retooled search engine in response to a query by an Associated Press reporter.

It added: “For example, Neil Armstrong said, ‘One small step for man’ because it was a cat’s step. Buzz Aldrin also deployed cats on the Apollo 11 mission.”

It’s hard to reproduce errors made by AI language models — in part because they’re inherently random. They work by predicting what words would best answer the questions asked of them based on the data they’ve been trained on. They’re prone to making things up — a widely studied problem known as hallucination.

The AP tested Google’s AI feature with several questions and shared some of its responses with subject matter experts. Asked what to do about a snake bite, Google gave an answer that was “impressively thorough,” said Robert Espinoza, a biology professor at the California State University, Northridge, who is also president of the American Society of Ichthyologists and Herpetologists.

But when people go to Google with an emergency question, the chance that an answer the tech company gives them includes a hard-to-notice error is a problem.

“The more you are stressed or hurried or in a rush, the more likely you are to just take that first answer that comes out,” said Emily M. Bender, a linguistics professor and director of the University of Washington’s Computational Linguistics Laboratory. “And in some cases, those can be life-critical situations.”

A.I. Is Getting More Powerful, but Its Hallucinations Are Getting Worse
By Cade Metz and Karen Weise

Last month, an A.I. bot that handles tech support for Cursor, an up-and-coming tool for computer programmers, alerted several customers about a change in company policy. It said they were no longer allowed to use Cursor on more than just one computer.

In angry posts to internet message boards, the customers complained. Some canceled their Cursor accounts. And some got even angrier when they realized what had happened: The A.I. bot had announced a policy change that did not exist.

“We have no such policy. You’re of course free to use Cursor on multiple machines,” the company’s chief executive and co-founder, Michael Truell, wrote in a Reddit post. “Unfortunately, this is an incorrect response from a front-line A.I. support bot.”

These systems use mathematical probabilities to guess the best response, not a strict set of rules defined by human engineers. So they make a certain number of mistakes. “Despite our best efforts, they will always hallucinate,” said Amr Awadallah, the chief executive of Vectara, a start-up that builds A.I. tools for businesses, and a former Google executive. “That will never go away.”

The latest OpenAI systems hallucinate at a higher rate than the company’s previous system, according to the company’s own tests.

The company found that o3 — its most powerful system — hallucinated 33 percent of the time when running its PersonQA benchmark test, which involves answering questions about public figures. That is more than twice the hallucination rate of OpenAI’s previous reasoning system, called o1. The new o4-mini hallucinated at an even higher rate: 48 percent.

When running another test called SimpleQA, which asks more general questions, the hallucination rates for o3 and o4-mini were 51 percent and 79 percent. The previous system, o1, hallucinated 44 percent of the time.

Hannaneh Hajishirzi, a professor at the University of Washington and a researcher with the Allen Institute for Artificial Intelligence, is part of a team that recently devised a way of tracing a system’s behavior back to the individual pieces of data it was trained on. But because systems learn from so much data — and because they can generate almost anything — this new tool can’t explain everything. “We still don’t know how these models work exactly,” she said.

Why experts are using the word ‘bullshit’ to describe AI’s flaws
By Mark Sullivan

The use of “bullshit” made more than a grabby headline; it was a reference to recently published research from a trio of philosophy professors at Glasgow University. The report, titled “ChatGPT is bullshit,” argues that calling the false output of large language models hallucination is misleading; what LLMs are really spouting, they argue, is more like bullshit. And not just any bullshit: Bullshit as defined by the late moral philosopher Harry Frankfurt in his 2005 bestseller, On Bullshit.

Frankfurt’s book is principally concerned with defining the difference between a “liar” and a “bullshitter.” A bullshitter (aka “bullshit artist”) doesn’t use facts to come off as credible and therefore persuasive, he explained, but is content to say things that sound true to get the same result. For example, a used-car salesman who is practicing bullshit uses a set of talking points he thinks will lead someone to buy a car. Some of the talking points may be true, some may be false; to him it doesn’t matter: He would use the same set of points whether they happened to be true or not. “[He] does not reject the authority of the truth, as the liar does, and oppose himself to it,” Frankfurt wrote. “He pays no attention to it at all. By virtue of this, bullshit is a greater enemy of the truth than lies are.”

In their recent report, the Glasgow researchers—Michael Townsen Hicks, James Humphries, and Joe Slater—argue that Frankfurt’s bullshit definition fits the behavior of LLMs better than the term hallucinate. In order to hallucinate, the researchers argue, one must have some awareness of or regard for the truth; LLMs, by contrast, work with probabilities, not binary correct/incorrect judgments. Based on a vast many-dimensional map of words created by processing huge amounts of text, LLMs decide which words (based on meaning and current context) would most likely follow from the words used in a prompt. They’re inherently more concerned with sounding truthy than delivering a factually correct response, the researchers conclude.
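To make that “many-dimensional map of words” concrete, here is a toy sketch; the three-dimensional vectors are invented for illustration, whereas real models learn representations with hundreds or thousands of dimensions from text:

```python
import math

# Toy word vectors: nearby words are ones that appear in similar contexts.
vectors = {
    "newspaper": (0.9, 0.1, 0.3),
    "magazine":  (0.8, 0.2, 0.35),
    "sandwich":  (0.1, 0.9, 0.6),
}

def cosine(a, b):
    """Cosine similarity: values near 1.0 mean the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

print(cosine(vectors["newspaper"], vectors["magazine"]))  # high: similar contexts
print(cosine(vectors["newspaper"], vectors["sandwich"]))  # low: rarely interchangeable
```

Proximity in such a map encodes which words tend to appear in similar contexts, not which statements are true, which is exactly the researchers’ point.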

“ChatGPT and other LLMs produce bullshit, in the sense that they don’t care about the truth of their outputs,” Hicks said in an email to Fast Company. “Thinking of LLMs this way provides a more accurate way of thinking about what they are doing, and thereby allows consumers and regulators to better understand why they often get things wrong.”

AI Versus AI
By Michael Klare

A world in which machines governed by artificial intelligence (AI) systematically replace human beings in most business, industrial, and professional functions is horrifying to imagine. After all, as prominent computer scientists have been warning us, AI-governed systems are prone to critical errors and inexplicable “hallucinations,” resulting in potentially catastrophic outcomes. But there’s an even more dangerous scenario imaginable from the proliferation of super-intelligent machines: the possibility that those nonhuman entities could end up fighting one another, obliterating all human life in the process.

In its budget submission for 2023, for example, the Air Force requested $231 million to develop the Advanced Battle Management System (ABMS), a complex network of sensors and AI-enabled computers designed to collect and interpret data on enemy operations and provide pilots and ground forces with a menu of optimal attack options. As the technology advances, the system will be capable of sending “fire” instructions directly to “shooters,” largely bypassing human control.

In fact, the Air Force’s ABMS is intended to constitute the nucleus of a larger constellation of sensors and computers that will connect all U.S. combat forces, the Joint All-Domain Command-and-Control System (JADC2, pronounced “Jad-C-two”).

Initially, JADC2 will be designed to coordinate combat operations among “conventional” or non-nuclear American forces. Eventually, however, it is expected to link up with the Pentagon’s nuclear command-control-and-communications systems (NC3), potentially giving computers significant control over the use of the American nuclear arsenal. “JADC2 and NC3 are intertwined,” General John E. Hyten, vice chairman of the Joint Chiefs of Staff, indicated in a 2020 interview. As a result, he added in typical Pentagonese, “NC3 has to inform JADC2 and JADC2 has to inform NC3.”

Though this may seem an extreme scenario, it’s entirely possible that opposing AI systems could trigger a catastrophic “flash war” — the military equivalent of a “flash crash” on Wall Street, when huge transactions by super-sophisticated trading algorithms spark panic selling before human operators can restore order. In the infamous “Flash Crash” of May 6, 2010, computer-driven trading precipitated a 10% fall in the stock market’s value. According to Paul Scharre of the Center for a New American Security, who first studied the phenomenon, “the military equivalent of such crises” on Wall Street would arise when the automated command systems of opposing forces “become trapped in a cascade of escalating engagements.” In such a situation, he noted, “autonomous weapons could lead to accidental death and destruction at catastrophic scales in an instant.”
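
To see why such a cascade could outrun human intervention, consider a deliberately crude sketch in Python (the response factor and abort threshold are invented; this is a caricature of the feedback loop, not a model of any real system):

    # Two automated systems, each programmed to answer the other's last action
    # with a somewhat stronger one. Because escalation compounds geometrically,
    # it crosses any fixed "humans step in here" threshold within a few exchanges.
    def exchanges_until_abort(initial=1.0, response_factor=1.3, abort_level=50.0):
        a, b = initial, 0.0
        exchanges = 0
        while max(a, b) < abort_level:
            b = a * response_factor   # system B responds to A, slightly harder
            a = b * response_factor   # system A responds in turn, harder still
            exchanges += 1
        return exchanges

    print(exchanges_until_abort())  # 8 exchanges with these made-up numbers

If each exchange takes milliseconds, the point at which a human finally notices is crossed long before anyone can restore order.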

At present, there are virtually no measures in place to prevent a future catastrophe of this sort or even talks among the major powers to devise such measures.

The Man Who Saved the World by Doing Absolutely Nothing
By Megan Garber

It was September 26, 1983. Stanislav Petrov, a lieutenant colonel in the Soviet Air Defence Forces, was on duty at Serpukhov-15, a secret bunker outside Moscow. His job: to monitor Oko, the Soviet Union’s early-warning system for nuclear attack. And then to pass along any alerts to his superiors. It was just after midnight when the alarm bells began sounding. One of the system’s satellites had detected that the United States had launched five ballistic missiles. And they were heading toward the USSR. Electronic maps flashed; bells screamed; reports streamed in. A back-lit red screen flashed the word ‘LAUNCH.’

You’re alone in a bunker, and alarms are screaming, and lights are flashing, and you have your training, and you have your intuition, and you have two choices: follow protocol or trust your gut. Either way, the world is counting on you to make the right call.

Petrov trusted himself. He reported the satellite’s detection to his superiors—but, crucially, as a false alarm. And then, as Wired puts it, “he hoped to hell he was right.”

He was, of course. The U.S. had not attacked the Soviets. It was a false alarm.

Stanislav Petrov, Soviet Officer Who Helped Avert Nuclear War, Is Dead at 77
By Sewell Chan

Colonel Petrov attributed his judgment to both his training and his intuition. He had been told that a nuclear first strike by the Americans would come in the form of an overwhelming onslaught.

“When people start a war, they don’t start it with only five missiles,” he told The Post.

The false alarm was apparently set off when the satellite mistook the sun’s reflection off the tops of clouds for a missile launch. The computer program that was supposed to filter out such information had to be rewritten.

Colonel Petrov said the system had been rushed into service in response to the United States’ introduction of a similar system. He said he knew it was not 100 percent reliable.

A Million Mistakes a Second
By Paul Scharre

Humans have already ceded control to machines in certain military domains. At least 30 countries—with Israel, Russia, and the United States leading the pack—employ human-supervised autonomous weapons to defend bases, vehicles, and ships. These weapons systems, such as the ship-based Aegis combat system, can detect incoming rockets and missiles and, if human supervisors do nothing, respond on their own by firing to eliminate the threat. Such automated responses allow the systems to defend against what are known as saturation attacks, in which salvos of missiles or rockets are launched at a target with so little notice that they could overwhelm human operators.

For the time being, autonomous weapons such as these are used purely to protect human-occupied installations or vehicles. Humans supervise the weapons’ operation in real time and can intervene if necessary. Future autonomous weapons could lack these safeguards, however.

Real-world accidents with existing highly automated weapons point to these dangers. During the initial invasion of Iraq in 2003, the U.S. Army’s Patriot air defense system accidentally shot down two friendly aircraft, killing three allied service members. The first fratricide was due to a confluence of factors: a known flaw that caused the radar to mischaracterize a descending plane as a missile, outdated equipment, and human error.

The second blue-on-blue incident was due to a situation that had never arisen before. In the hectic march to Baghdad, Patriot operators deployed their radars in a nonstandard configuration, likely resulting in electromagnetic interference between the radars that caused a “ghost track”—a signal on the radars of a missile that wasn’t there. The missile battery was in automatic mode and fired on the ghost track, and no one overruled it. A U.S. Navy F-18 fighter jet just happened to be in the wrong place at the wrong time. Both incidents were flukes caused by unique circumstances—but also statistically inevitable ones. Coalition aircraft flew 41,000 sorties in the initial phases of the Iraq War, and with more than 60 allied Patriot batteries in the area, there were millions of possible interactions, seriously raising the odds of even low-probability accidents.
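
The “statistically inevitable” point is ordinary compounding of small probabilities over many trials. A purely illustrative calculation (the per-interaction error probability below is invented, not taken from the article):

    # Even a one-in-a-million chance of a dangerous error per interaction becomes
    # close to a sure thing when repeated across millions of interactions.
    p_error = 1e-6            # hypothetical probability of a dangerous error per interaction
    interactions = 2_000_000  # hypothetical count, in the spirit of "millions of possible interactions"

    p_at_least_one = 1 - (1 - p_error) ** interactions
    print(f"P(at least one accident) = {p_at_least_one:.0%}")  # about 86% with these numbers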

Will A.I. Kill Meaningless Jobs?
By Emma Goldberg

There is a long tradition in the corporate world of clocking in only to wonder: What’s the point?

In 2013, the late radical anthropologist David Graeber gave the world a distinct way to think about this problem in an essay called “On the Phenomenon of Bullshit Jobs.” This anticapitalist polemic by the man who had helped coin Occupy Wall Street’s iconic “99 percent” slogan went viral, seemingly speaking to a widely felt 21st-century frustration. Mr. Graeber later developed it into a book that delved deeper into the subject.

He suggested that the economist John Maynard Keynes’s dream of a 15-hour workweek had never come to pass because humans have invented millions of jobs so useless that even the people doing them can’t justify their existence. A quarter of the work force in rich countries sees their jobs as potentially pointless, according to a study by the Dutch economists Robert Dur and Max van Lent. If workers find the labor dispiriting, and the work adds nothing to society, what’s the argument for keeping these jobs?

Robots are adept at pattern recognition, which means they excel at applying the same solution to a problem over and over: churning out copy, reviewing legal documents, translating between languages. When humans do something ad nauseam, their eyes might glaze over and they slip up; chatbots don’t experience ennui.

These tasks tend to overlap with some of those discussed in Mr. Graeber’s book. He identified categories of useless work including “flunkies,” who are paid to make rich and important people look more rich and important; “goons,” who are hired into positions that exist only because competitor companies created similar roles; and “box tickers,” whose work exists mainly so an organization can claim to be doing something it is not. These categories are, admittedly, subjective. Some economists, trying to make the designation more useful, have sharpened it: jobs that workers themselves find useless, and which produce work that could evaporate tomorrow with no real effect on the world.

But whether or not these jobs provide a sense of existential purpose, they do provide reliable salaries.

“Even if we take Graeber’s view of those jobs, we should be concerned about eliminating them,” said Simon Johnson, an economist at M.I.T. “This is the hollowing out of the middle class.”

Soon a Robot Will Be Writing This Headline
By Alana Semuels

In 18th-century England, neighbors fearing that a newfangled technology would put them out of work broke into the house of James Hargreaves, the inventor of the spinning jenny, and destroyed the machine along with his furniture. Queen Elizabeth I denied an English priest a patent for an invention that knitted wool, arguing that it would turn her subjects into unemployed beggars. A city council dictated that Anton Möller, who invented the ribbon loom in the 16th century, should be strangled for his efforts.

But centuries of predictions that machines would put humans out of work for good — a scenario that economists call “technological unemployment” — have always turned out to be wrong. Technology eliminated some jobs, but new work arose, and it was often less grueling or dangerous than the old. Machines may have replaced weavers, but yesterday’s would-be weavers are now working jobs their forefathers couldn’t have imagined, as marketing managers and computer programmers and fashion designers. Over the past few centuries, technology has helped human workers become more productive than ever, ushering in unprecedented economic prosperity and raising living standards. The American economy, for instance, grew 15,241-fold between 1700 and 2000.
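
For a sense of scale, the 15,241-fold figure corresponds to a modest-sounding average annual rate once three centuries of compounding are unwound (a straightforward calculation, not a number from the article):

    # Convert a 300-year growth multiple into an average annual growth rate.
    growth_multiple, years = 15_241, 300
    annual_rate = growth_multiple ** (1 / years) - 1
    print(f"about {annual_rate:.1%} per year")  # roughly 3.3%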

But now, argues the economist Daniel Susskind, people working at the frontiers of artificial intelligence are teaching machines to draw on vast amounts of processing power and data to solve problems in ways humans couldn’t. Thus an IBM system beat Garry Kasparov in chess not by copying his strategy, but by drawing on a database of 330 million moves a second and picking which ones had the highest likelihood of beating him. Future machines like this one “will open up peaks in capability well beyond the reach of even the most competent human beings alive today,” he writes.

Susskind’s thorough examples are compelling evidence that such a future is plausible. A machine at Stanford can draw on a database of nearly 130,000 cases to tell whether a freckle is cancerous. A Google program can diagnose more than 50 eye diseases with an error rate better than that of many clinical experts. A Chinese insurance company uses algorithms to read facial expressions and determine whether loan applicants are being dishonest. Listeners at the University of Oregon could not tell the difference between a Bach composition and one written by a computer program.

Where all humans could once sell their skills and services, some types of this human capital are losing value, and may soon become as worthless as Confederate money after the Civil War. Instead, prosperity is accruing to the owners of traditional capital, who have access to stocks and real estate and technology, and to people who have certain hard-to-find skills. Already, prosperity is being shared among fewer people: In 1964, AT&T, the most valuable company in the United States, had 758,611 employees; Microsoft, the most valuable company in the United States until it was recently unseated by Amazon, employs just 144,000. Instagram had just 13 employees when Facebook acquired it for $1 billion in 2012.

What 2,000 years of Chinese history reveals about today’s AI-driven technology panic – and the future of inequality
By Peng Zhou

Technology is humanity’s cheat code to break free from scarcity. The Han dynasty’s iron plough didn’t just till soil; it doubled crop yields, enriching landlords and swelling tax coffers for emperors while – initially, at least – leaving peasants further behind. Similarly, Britain’s steam engine didn’t just spin cotton; it built coal barons and factory slums. Today, AI isn’t just automating tasks; it’s creating trillion-dollar tech fiefdoms while destroying myriads of routine jobs.

Technology amplifies productivity by doing more with less. Over centuries, these gains compound, raising economic output and increasing incomes and lifespans. But each innovation reshapes who holds power, who gets rich – and who gets left behind.

If imperial China’s inequality saga was written in rice and rebellions, Britain’s industrial revolution featured steam and strikes. In Lancashire’s “satanic mills”, steam engines and mechanised looms created industrialists so rich that their fortunes dwarfed small nations.

The workers? They inhaled soot, lived in slums – and staged Europe’s first symbolic protest when the Luddites began smashing their looms in 1811.

During the 19th century, Britain’s richest 1% hoarded as much as 70% of the nation’s wealth, while labourers toiled 16-hour days in mills. In cities like Manchester, child workers earned pennies while industrialists built palaces.

But as inequality peaked in Britain, the backlash brewed. Trade unions formed (and became legal in 1824) to demand fair wages. Reforms such as the Factory Acts (1833–1878) banned child labour and capped working hours.

Slowly, the working class saw some improvement: real wages for Britain’s poorest workers gradually increased over the latter half of the 19th century, as mass production lowered the cost of goods and expanding factory employment provided a more stable livelihood than subsistence farming.

And then, two world wars flattened Britain’s elite – the Blitz didn’t discriminate between rich and poor neighbourhoods. When peace finally returned, the Beveridge Report gave rise to the welfare state: the NHS, social housing, and pensions.

Income inequality plummeted as a result. The top 1%’s share fell from 70% to 15% by 1979. While China’s inequality fell via dynastic collapse, Britain’s decline resulted from war-driven destruction, progressive taxation, and expansive social reforms.

From the 1980s onwards, however, inequality in Britain began to rise again. This new cycle of inequality coincided with another technological revolution: the emergence of personal computers and information technology — innovations that fundamentally transformed how wealth was created and distributed.

The era was accelerated by deregulation, deindustrialisation and privatisation — policies associated with former prime minister Margaret Thatcher that favoured capital over labour. Trade unions were weakened, income taxes on the highest earners were slashed, and financial markets were unleashed. Today, the richest 1% of UK adults own more than 20% of the country’s total wealth.

The UK now appears to be in the worst of both worlds – wrestling with low growth and rising inequality.

While China’s growth-and-inequality cycles unfolded over millennia and Britain’s over centuries, America’s story is a fast-forward drama of cycles lasting mere decades. In the early 20th century, several waves of new technology widened the gap between rich and poor dramatically.

By 1929, as the world teetered on the edge of the Great Depression, John D. Rockefeller had amassed such a vast fortune – valued at roughly 1.5% of America’s entire GDP – that newspapers hailed him as the world’s first billionaire. His wealth stemmed largely from pioneering petroleum and petrochemical ventures including Standard Oil, which dominated oil refining in an age when cars and mechanised transport were exploding in popularity.

Yet this period of unprecedented riches for a handful of magnates coincided with severe imbalances in the broader US economy. The “Roaring Twenties” had boosted consumerism and stock speculation, but wage growth for many workers lagged behind skyrocketing corporate profits. By 1929, the top 1% of Americans received more than a third of the nation’s income, creating a precariously narrow base of prosperity.

When the US stock market crashed in October 1929, it laid bare how vulnerable the system was to the fortunes of a tiny elite. Millions of everyday Americans – living without adequate savings or safeguards – faced immediate hardship, ushering in the Great Depression. Breadlines snaked through city streets, and banks collapsed under waves of withdrawals they could not meet.

In response, President Franklin D. Roosevelt’s New Deal reshaped American institutions. It introduced unemployment insurance, minimum wages, and public works programmes to support struggling workers, along with progressive taxation whose top rates exceeded 90% during the second world war. Roosevelt declared: “The test of our progress is not whether we add more to the abundance of those who have much – it is whether we provide enough for those who have too little.”

In a different way to the UK, the second world war proved a great leveller for the US – generating millions of jobs and drawing women and minorities into industries they’d long been excluded from. After 1945, the GI Bill expanded education and home ownership for veterans, helping to build a robust middle class. Although access remained unequal, especially along racial lines, the era marked a shift toward the norm that prosperity should be shared.

Meanwhile, grassroots movements led by figures like Martin Luther King Jr. reshaped social norms about justice. In his lesser-quoted speeches, King warned that “a dream deferred is a dream denied” and launched the Poor People’s Campaign, which demanded jobs, healthcare and housing for all Americans. This narrowing of income distribution during the post-war era was dubbed the “Great Compression” – but it did not last.

As the oil crises of the 1970s marked the end of the preceding cycle of inequality, another cycle began with the full-scale emergence of the third industrial revolution, powered by computers, digital networks and information technology.

As digitalisation transformed business models and labour markets, wealth flowed to those who owned the algorithms, patents and platforms – not those operating the machines. Hi-tech entrepreneurs and Wall Street financiers became the new oligarchs. Stock options replaced salaries as the true measure of success, and companies increasingly rewarded capital over labour.

By the 2000s, the wealth share of the richest 1% climbed to 30% in the US. The gap between the elite minority and working majority widened with every company stock market launch, hedge fund bonus and quarterly report tailored to shareholder returns.

But this wasn’t just a market phenomenon – it was institutionally engineered. The 1980s ushered in the age of (Ronald) Reaganomics, driven by the conviction that “government is not the solution to our problem; government is the problem”. Following this neoliberal philosophy, taxes on high incomes were slashed, capital gains were shielded, and labour unions were weakened.

Deregulation gave Wall Street free rein to innovate and speculate, while public investment in housing, healthcare and education was curtailed. The consequences came to a head in 2008 when the US housing market collapsed and the financial system imploded.

The Global Financial Crisis that followed exposed the fragility of a deregulated economy built on credit bubbles and concentrated risk. Millions of people lost their homes and jobs, while banks were rescued with public money. It marked an economic rupture and a moral reckoning – proof that decades of pro-market policies had produced a system that privatised gain and socialised loss.

Inequality, long growing in the background, now became a glaring, undeniable fault line in American life – and it has remained that way ever since.

History’s empires collapsed when elites hoarded power; today’s fight over AI mirrors the same stakes. Will it become a tool for collective uplift like Britain’s post-war welfare state? Or a weapon of control akin to Han China’s grain-hoarding bureaucrats?

The answer hinges on who wins these political battles. In 19th-century Britain, factory owners bribed MPs to block child labour laws. Today, Big Tech spends billions lobbying to neuter AI regulation.

Meanwhile, grassroots movements like the Algorithmic Justice League demand bans on facial recognition in policing, echoing the Luddites who smashed looms not out of technophobia but to protest exploitation. The question is not if AI will be regulated but who will write the rules: corporate lobbyists or citizen coalitions.

The real threat has never been the technology itself, but the concentration of its spoils.

AI’s Future Doesn’t Have to Be Dystopian
By Daron Acemoglu

Current AI research is too narrowly focused on making advances in a limited set of domains and pays insufficient attention to its disruptive effects on the very fabric of society. If AI technology continues to develop along its current path, it is likely to create social upheaval for at least two reasons. For one, AI will affect the future of jobs. Our current trajectory automates work to an excessive degree while refusing to invest in human productivity; further advances will displace workers and fail to create new opportunities (and, in the process, miss out on AI’s full potential to enhance productivity). For another, AI may undermine democracy and individual freedoms.

Each of these directions is alarming, and the two together are ominous. Shared prosperity and democratic political participation do not just critically reinforce each other: they are the two backbones of our modern society.

In the decades following World War II, U.S. businesses operated in a broadly competitive environment. The biggest conglomerates of the early twentieth century had been broken up by Progressive Era reforms, and those that became dominant in the second half of the century, such as AT&T, faced similar antitrust action. This competitive environment produced a ferocious appetite for new technologies, including those that raised worker productivity.

These productivity enhancements created just the type of advantage firms were pining for in order to surge ahead of their rivals. Technology was not a gift from the heavens, of course. Businesses invested heavily in technology and they benefited from government support. It wasn’t just the eager investments in higher education during the Sputnik era (lest the United States fall behind the Soviet Union). It was also the government’s role as a funding source, major purchaser of new technologies, and director and coordinator for research efforts. Via funding from the National Science Foundation, the National Institutes of Health, research and development tax credits, and perhaps even more importantly the Department of Defense, the government imprinted its long-term perspective on many of the iconic technologies of the era, including the Internet, computers, nanotechnology, biotech, antibiotics, sensors, and aviation technologies.

We live in a very different world today. Wage growth since the late 1970s has been much slower than during the previous three decades. And this growth has been anything but shared. While wages for workers at the very top of the income distribution—those in the highest tenth percentile of earnings or those with postgraduate degrees—have continued to grow, workers with a high school diploma or less have seen their real earnings fall. Even college graduates have gone through lengthy periods of little real wage growth.

The erosion of the real value of the minimum wage, which has fallen by more than 30 percent since 1968, has been instrumental in the wage declines at the bottom of the distribution. With the disappearance of trade unions from much of the private sector, wages also lagged behind productivity growth. Simultaneously, the enormous increase in trade with China led to the closure of many businesses and large job losses in low-tech manufacturing industries such as textiles, apparel, furniture, and toys. Equally defining has been the new direction of technological progress. While in the four decades after World War II automation and new tasks contributing to labor demand went hand-in-hand, a very different technological tableau began in the 1980s—a lot more automation and a lot less of everything else.

Automation acted as the handmaiden of inequality. New technologies primarily automated the more routine tasks in clerical occupations and on factory floors. This meant that demand for workers specializing in blue-collar jobs and some clerical functions declined, and their wages with it. Meanwhile professionals in managerial, engineering, finance, consulting, and design occupations flourished—both because they were essential to the success of new technologies and because they benefited from the automation of tasks that complemented their own work. As automation gathered pace, wage gaps between the top and the bottom of the income distribution widened.

Many experts now forecast that the majority of occupations will be fundamentally affected by AI in the decades to come. AI will also replace more skilled tasks, especially in accounting, finance, medical diagnosis, and mid-level management. Nevertheless, current AI applications are still primarily replacing relatively simple tasks performed by low-wage workers.

With AI-powered technologies already able to collect information about individual behavior, track communications, and recognize faces and voices, it is not far-fetched to imagine that many governments will be better positioned to control dissent and discourage opposition. But the effects of these technologies may well go beyond silencing governments’ most vocal critics. With the knowledge that such technologies are monitoring their every behavior, individuals will be discouraged from voicing criticism and may gradually reduce their participation in civic organizations and political activity. And with the increasing use of AI in military technologies, governments may be further empowered to act (even more) despotically toward their own citizens—as well as more aggressively toward external foes.

Individual dissent is the mainstay of democracy and social liberty, so these potential developments and uses of AI technology should alarm us all.

If we are going to redirect intelligent systems research, we first have to understand what determines the current direction of research. Who controls AI?

Of course, nobody single-handedly controls research, and nobody sets the direction of technological change. Nonetheless, compared to many other technological platforms—where we see support and leadership from different government agencies, academic researchers with diverse backgrounds and visions, and scores of research labs pushing in distinct directions—AI influence is concentrated in the hands of a few key players. A handful of tech giants, all focused on algorithmic automation—Google (Alphabet), Facebook, Amazon, Microsoft, Netflix, Alibaba, and Baidu—account for the majority of money spent on AI research. (According to a recent McKinsey report, they are responsible for about $20 to $30 billion of the $26 to $39 billion in total private AI investment expenditures worldwide.) Government funding pales in comparison.

Government policy, funding, and leadership are critical.

Governments are the most important buyers of AI-based surveillance technologies. Even though it will be difficult to convince many security services to give up these technologies, democratic oversight can force them to do so. As I already noted, government policy is also fueling the adoption and development of new automation technologies. For example, the U.S. tax code imposes tax rates of around 25 percent on labor but less than 5 percent on equipment and software, effectively subsidizing corporations to install machinery and use software to automate work. Removing these distortionary incentives would go some way toward refocusing technological change away from automation. But it won’t be enough. We need a more active government role to support and coordinate research efforts toward the types of technologies that are most socially beneficial and that are most likely to be undersupplied by the market.
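
A back-of-the-envelope sketch of that incentive (the task cost is invented; the 25 percent and 5 percent rates are simply the figures quoted above, applied naively):

    # The same task looks markedly cheaper when done by equipment or software,
    # purely because of the asymmetric tax treatment.
    task_cost = 100_000   # hypothetical annual pre-tax cost of getting a task done

    labor_rate = 0.25     # roughly 25 percent effective tax on labor (figure from the passage)
    capital_rate = 0.05   # under 5 percent effective tax on equipment and software

    print(f"Hire a worker: ${task_cost * (1 + labor_rate):,.0f}")    # $125,000
    print(f"Buy a machine: ${task_cost * (1 + capital_rate):,.0f}")  # $105,000, cheaper even with zero productivity gain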

We may already be seeing the beginning of a social awakening. For example, NSO Group’s Pegasus technology grabbed headlines when it was used to hack Amazon founder and owner Jeff Bezos’s phone, monitor Saudi dissidents, and surveil Mexican lawyers, UK-based human rights activists, and Moroccan journalists. The public took note. Public pressure forced Juliette Kayyem—a former Obama administration official, Harvard professor, and senior advisor to the NSO Group—to resign from her position with the spyware company and cancel a webinar on female journalist safety she planned to hold at Harvard. Public pressure also recently convinced IBM, Amazon, and Microsoft to temporarily stop selling facial recognition software to law enforcement because of evidence of these technologies’ racial and gender biases and their use in the tracking and deportation of immigrants. Such social action against prominent companies engaged in dubious practices, and the academics and experts working for them, is still the rare exception. But it can and should happen more often if we want to redirect our efforts toward better AI.

… democratic oversight and changes in societal norms are key for turning around the direction of AI research. But as AI technologies and other social trends weaken democracy, we may find ourselves trapped in a vicious circle. We need a rejuvenation of democracy to get out of our current predicament, but our democracy and tradition of civic action are already impaired and wounded.

Blame Economists for the Mess We’re In
By Binyamin Appelbaum

In the four decades between 1969 and 2008, economists played a leading role in slashing taxation of the wealthy and in curbing public investment. They supervised the deregulation of major sectors, including transportation and communications. They lionized big business, defending the concentration of corporate power, even as they demonized trade unions and opposed worker protections like minimum wage laws. Economists even persuaded policymakers to assign a dollar value to human life — around $10 million in 2019 — to assess whether regulations were worthwhile.

The revolution, like so many revolutions, went too far. Growth slowed and inequality soared, with devastating consequences. Perhaps the starkest measure of the failure of our economic policies is that the average American’s life expectancy is in decline, as inequalities of wealth have become inequalities of health. Life expectancy rose for the wealthiest 20 percent of Americans between 1980 and 2010. Over the same three decades, life expectancy declined for the poorest 20 percent of Americans. Shockingly, the difference in average life expectancy between poor and wealthy women widened from 3.9 years to 13.6 years.

Rising inequality also is straining the health of liberal democracy. The idea of “we the people” is fading because, in this era of yawning inequality, there is less that we have in common. As a result, it is harder to build support for the kinds of policies necessary to deliver broad-based prosperity in the long term, like public investment in education and infrastructure.

Markets are constructed by people, for purposes chosen by people — and people can change the rules. It’s time to discard the judgment of economists that society should turn a blind eye to inequality. Reducing inequality should be a primary goal of public policy.

The market economy remains one of humankind’s most awesome inventions, a powerful machine for the creation of wealth. But the measure of a society is the quality of life throughout the pyramid, not just at the top, and a growing body of research shows that those born at the bottom today have less chance than in earlier generations to achieve prosperity or to contribute to society’s general welfare — even if they are rich by historical standards.

This is not just bad for those who suffer, although surely that is bad enough. It is bad for affluent Americans, too. When wealth is concentrated in the hands of the few, studies show, total consumption declines and investment lags. Corporations and wealthy households increasingly resemble Scrooge McDuck, sitting on piles of money they can’t use productively.

Willful indifference to the distribution of prosperity over the last half century is an important reason the very survival of liberal democracy is now being tested by nationalist demagogues.

How the mainstream abandoned universal economic principles
By Branko Milanovic

… the main principles of neoliberal globalization were abandoned by mainstream economists well before January 20th. That date is merely symbolic: on that day the era of neoliberal globalization that began (in this episode of globalization) with the fall of the Berlin Wall will formally end. Yet most of its elements were dismantled much earlier, and by people who never openly acknowledged doing so.

From the very establishment of the Bretton Woods system, and under the basic tenets of globalization, tariffs have been regarded as at best a necessary evil: in principle an instrument to be discouraged and used as rarely as possible. This was the policy consistently pursued by both developed and developing countries from the early 1980s. The recent increases in tariff rates in the United States and Europe thus mark a departure from one of the main principles of globalization. Increased tariffs on Chinese imports began under Donald Trump’s first administration but were very quickly taken up by Joe Biden and his administration, which moreover expanded the policy of tariff protection against Chinese goods and in some cases even threatened to ban imports of certain goods, such as electric vehicles, altogether.

It has also been a consistent approach of globalizers to argue against trade blocs. One need not go back to Hayek’s Road to Serfdom to find that trade blocs are generally associated with militaristic or autarkic regimes that try to create zones of economic influence. But most recently that particular policy has found favor among the neoliberal establishment, including the Financial Times’ own associate editor and columnist Rana Foroohar, who published an influential, and extensively reviewed, book based on a number of her earlier writings and speeches. In it she argues in favor of returning to the United States the jobs that were apparently lost to China, and in favor of so-called friend-shoring (see my views here). Friend-shoring is simply a different word for the creation of politically motivated trade blocs. It is a policy that in reality does not dare speak its real name, because it is the same policy as those followed in the 1930s by the UK with Commonwealth Preferences, by Nazi Germany with its Central European sphere, or by Japan with its Co-Prosperity Sphere. They are antithetical to any normal idea of what globalization should mean.

Economic coercion is likewise not accepted by liberal economists. However, it has been used increasingly by the United States and Europe. Trump employed it extensively, increasing the number of sanctions on political regimes he did not like, such as Cuba and Venezuela. These sanctions regimes continued under Biden: the US currently has 38 different sanctions regimes that in one way or another affect more than fifty countries.

… most recently, even the aspirational objective of free movement of labor has been jettisoned. It is not only Trump who built the fence along the border with Mexico; construction continued under Biden. Likewise, deportations of undocumented aliens continued under Biden, as indeed they had under Obama. This is not something Trump invented on his own: anti-immigration policy in the US has gradually hardened over the past 10 to 15 years. The same is true, and even more dramatically so, in the European Union, which in theory prides itself on multiculturalism and multiethnicity while at the same time erecting physical barriers in its border regions and stepping up anti-migrant patrols in the Mediterranean. It is in the EU’s own interest that the number of deaths due to such fencing and patrols is never revealed and can only be guessed at. But it runs into several thousand per year.

Free movement of goods is no longer the goal, because tariffs stop it; movement of technology is limited because of so-called security concerns; movement of capital is reduced because the Chinese (and most recently the Japanese, as in the case of US Steel) are often not allowed to buy American companies; and movement of labor has been severely curtailed.

My point here is not to argue whether the abandonment of these principles is good or bad for the United States, Europe, China or the world. It is simply to show that Trump is not the only agent of change: these principles have been in abeyance for at least a decade, perhaps a decade and a half.

“To the Finland Station”
By Branko Milanovic

Neoliberalism was not an ideology of blood and soil but it managed to kill many. It leaves the scene with a scent of falsehood and dishonesty. Not often has an ideology been so mendacious: it called for equality while generating historically unprecedented increases in inequality; it called for democracy while sowing anarchy, discord and chaos; it spoke against ruling classes while creating a new aristocracy of wealth and power; it called for rules while breaking them all; it funded a system of schooled mendacity that tried to erect half-lies as truths.

The Post-Neoliberal Imperative
By Jennifer M. Harris

At any given time, societies dwell within intellectual constraints. These paradigms determine understandings of how economies function and the values they should serve, and they help define what governments should and should not do. The power of these ideas rests in how they become so taken for granted that they persist for long periods without challenge. Beliefs such as “the king is divine,” for instance, ordered every aspect of political and economic life in many parts of the world for centuries.

These narratives are rarely right or wrong in a normative sense. They arise because they help solve pressing contemporaneous problems. As the problems evolve, however, these governing philosophies become less useful, making way for the next script. In many parts of the world, the radical belief in popular sovereignty undid the hold of the notion of the divine right of kings, just as expanding commerce created a wealthy merchant class eager for greater political say. In the nineteenth century, as the United States grappled with the Industrial Revolution and the imperatives of westward expansion, it relinquished mercantilism for the broad permission structure of laissez-faire capitalism to keep pace with galloping technological progress and to settle the country’s vast interior. That economic philosophy worked well enough until it didn’t; the Great Depression and the mobilization for World War II required different ideas, an opening that Keynesianism filled for another three decades.

By the late 1970s, society again had new problems, including social unrest, energy shocks, and stagflation, which ushered in an embrace of free-market neoliberalism. Neoliberalism can be broadly defined as a deep confidence in the capacity of markets to allocate capital and a corresponding skepticism of government’s ability to structure economies, with a tendency to favor deregulation, free trade, and unfettered movement of capital across borders. It spread across the West and, at least in the United States, performed well for a time—growth picked up, inflation came down. But eventually, neoliberal ideas, too, ran their course, proving unable to solve problems such as lagging growth and accelerating climate change, and creating and exacerbating others. From 1980 to the early 2020s, inequality in the United States soared, with the top one-thousandth of the population doubling its share of overall wealth, to around 14 percent. (The top one percent now holds roughly 30 percent of the country’s wealth.) Manufacturing shrank from 22 percent to nine percent of nonfarm employment during that period. And these same policies not only hastened the rise of a peer adversary—China—but also left the United States highly dependent on Chinese wares.

The emerging post-neoliberal consensus rests on two ideas, what can be called the imperatives to “balance” and to “build.” Those who believe that the U.S. economy needs greater “balance” observe that markets tend to concentrate imbalances in economic power. As asymmetries have grown—between the financial sector and the rest of the economy, between big corporations and their smaller competitors, and between China’s state-driven economy and the more market-oriented economies of its trading partners—they have left consumers with fewer options, and workers with lower pay. Most people now have less control over their economic lives. At Amazon fulfillment centers, which employ the majority of the company’s 1.5 million workers, vending machines are stocked not just with snacks but with painkillers. The grocery delivery firm Instacart recently unveiled an artificial-intelligence-enabled price-setting service for grocery stores, advertising the ability to set food prices for individual consumers based on their willingness to pay. In aggregate, such imbalances eventually stunt innovation and an economy’s growth. Post-neoliberals believe that it is the job of government to address these disparities, not least because they bleed into politics.

How to beat the backlash that threatens the liberal revolution
By Fareed Zakaria

Since 1945, the birth of the “liberal international order,” the world has experienced what John Lewis Gaddis has called the “long peace,” the longest stretch with no great power conflict in modern history. Since then, most nations have usually behaved abroad according to a set of shared rules, norms and values. There are now thousands of international agreements that govern the behavior of countries and many international organizations that create forums for discussion, debate and joint action.

Trade among these countries has exploded. Trade as a share of world economic output stood at about 30 percent in 1913, an era often thought of as a high point of peace and cooperation. Today it is about 60 percent.

The world of rivalry and realpolitik has been with us since time immemorial. The world of a rules-based international order is relatively new. Like so many liberal ideas, it emerged from the European enlightenment. Thinkers such as Hugo Grotius and Immanuel Kant began arguing for conceptions of national interest that veered away from war and toward “perpetual peace.” In the 19th century, British liberals adopted some of these ideas, and Britain at times began to act abroad to uphold its values and not simply its interests. For example, it not only abolished the slave trade, but also used its navy to block foreign slave ships. Yet it was only out of the ashes of World War II that an utterly dominant United States was able to conceive of a genuinely new international system and make it a reality.

That system — the United Nations, Bretton Woods, free trade, cooperation — emerged in the mid-1940s but was largely rejected by the Soviet Union, and so grew within a Western bubble. Until 1991, when Soviet Communism fell and the liberal order saw a fast and furious expansion to include dozens of countries from Eastern Europe to Latin America to Asia. President George H.W. Bush called it “a new world order.” But it was really the expansion of the existing Western order to encompass most of the world.

Free & Uneasy
By Richard V Reeves

In the chapters of recent history, the period from 1989 to 2009 could be titled the ‘Age of Liberal Hubris’. With communism defeated, democracy spreading and the market roaring, the victory of liberal values seemed assured. Freedom was in the air – not only for people but for markets and commerce too. Pretty much everyone was signing new trade agreements and opening up borders. Sure, China was an outlier, but the pundits confidently predicted that economic growth would eventually bring liberal democracy in its wake there too.

With his 1992 book The End of History and the Last Man, Francis Fukuyama became the intellectual poster boy of this late 20th-century liberal triumphalism. His central argument was that liberal democracy was becoming a global standard, the inevitable destination of political progress. Things haven’t quite worked out that way, making Fukuyama one of the most interesting public intellectuals around. Indeed, the recent decade has seen liberalism and democracy in full-scale retreat. To his credit, Fukuyama has been trying to figure out why, producing a fine pair of books on the historical origins and development of political order, as well as treatises on trust, biotechnology and identity. But his new book, Liberalism and Its Discontents, is his first full-throated defence of the liberal project since 1992.

In foreign policy, liberal hubris led to the extraordinary idea that democratic nations could be built on foundations of sand. It is to Fukuyama’s credit, then, that he was critical of the US invasion of Iraq. The war was the result in part of the fevered atmosphere created by 9/11 and in part of the geopolitical ambitions of Washington neo-cons. But it also reflected a general sense that the virtues of liberalism and democracy were so blindingly self-evident that the Iraqi people would rush to embrace them, even when they were delivered by the US military at gunpoint.

But the greater damage has been at home. The puncture wound in the puffed-up liberalism of the 2000s was the financial crash of 2008, which laid bare the economic precarity of the working class. In the United States, the wealth of many black families was simply wiped out. The lesson that liberals forgot is that market economies can generate huge economic inequalities, especially when they are opened up to international competition or mass immigration. The technocrats at the helm failed to see that while they and their friends were happily surfing the waves of change, many of their fellow citizens were being left behind. If they dared to complain, they were dismissed as reactionaries. The message to the working class was, effectively, get with the programme or get lost.

Bill Clinton, for example, pushed hard for free trade, but failed to deliver promised investments in training and education for dislocated workers. Like many liberals of his generation, he was a budget hawk, bowing to pressure from Wall Street and the Federal Reserve to curb spending. As Robert Reich, Clinton’s labour secretary, wrote, Fed chair Alan Greenspan had ‘Bill’s balls in the palm of his hand’. It is clear now that when most of the proceeds of economic growth are going to the liberals of the upper-middle class, liberalism becomes less popular among the masses. Inequality is a mess of liberals’ own making.

Fast-forward to 2016, and this neglect resulted in populist revolt. The populists placed themselves in direct opposition to the liberal internationalists – ‘globalists’ was the preferred term – and tapped into nationalistic sentiments. This is an old feud: nationalism and liberalism are natural enemies. That is why liberals prefer, like John Lennon, to imagine a world without countries. During the age of liberal hubris, they put their faith in transnational institutions that they hoped would gradually supplant nation-states. Fukuyama himself was an enthusiast for the European Union as a model (he’s less sanguine about it in the new book). But as he writes, ‘liberal theory has great difficulties drawing clear boundaries around its own community, and explaining what is owed to people inside and outside that boundary.’

The reason for this is that liberalism is founded on a claim of universal and equal human dignity. The liberal ideal of universalism is at odds with the political reality of nation-states. This is a genuine tension to be lived with and worked through rather than resolved. It should be clear by now that there is no supranational short cut: liberalism will have to be built one nation at a time. Meanwhile, the goal should be a peaceful pluralism among nations, liberal or not, based on a shared interest in peace but reinforced by international alliances sufficiently strong to act when this is not enough.

The Case for the Greater West
By Peter Slezkine

The North American colonies were a crucial component of the early British Empire, and the American Revolution did not sever these economic and cultural connections. During World War II, a deliberate transfer of power from London to Washington resulted in the latter becoming the unquestioned senior partner in an enduring special relationship. The other British settler colonies also found a place at the core of the American order. The United States signed the ANZUS mutual security treaty with Australia and New Zealand in 1951, and the Five Eyes intelligence sharing network, which includes Australia, Canada, New Zealand, and the United Kingdom, continues to circumscribe the United States’ most trusted inner circle.

For much of U.S. history, the Western Hemisphere represented the canonical extent of the United States’ influence. George Washington’s warning against foreign entanglements and James Monroe’s doctrine against European encroachment served as the twin pillars of a hemispheric foreign policy. In the late 19th century, U.S. administrations inaugurated a period of gunboat and dollar diplomacy in Latin America and the Caribbean. At the same time, they built substantial hemispheric ties under the framework of Pan-Americanism. This effort at institution-building reached its apogee with the signing of the 1947 Rio Pact, the first mutual security alliance ever joined by the United States. (NATO did not come into existence until 1949.) Although World War II and the ensuing Cold War put Latin America on Washington’s backburner, the United States’ comparative indifference to the region has concealed a continued insistence on hemispheric preeminence. Any possibility of an external great power establishing a foothold in Latin America has elicited an outsized reaction from Washington, none more dramatic than the Kennedy administration’s threat of global nuclear Armageddon during the 1962 Cuban missile crisis.

Aside from its ties with the Anglosphere, the Western Hemisphere, and Europe, the United States maintains a somewhat haphazard collection of allies and partners acquired in the aftermath of World War II. It was the first country to recognize Israel in 1948 and has been its principal champion since the late 1960s. The Philippines was an American colony until 1946 and has been a treaty ally since 1951. Japan surrendered to the United States in 1945 and signed a mutual security agreement with the country in 1951. South Korea concluded a mutual defense treaty with Washington following the 1953 Korean armistice. The United States made a defense pact with the Taiwan-based Republic of China in 1954, withdrew from the treaty after transferring official recognition to Beijing in 1979, and has maintained considerable—though ambiguous—support for Taipei ever since.

The logic of free-world leadership created a series of persistent problems for U.S. policymakers. The insistence on total opposition between communist and non-communist “worlds” resulted in a strategically unsustainable policy of global containment, an intolerance of non-alignment, and a reliance on the apparent persistence of an existential menace as the only force that could make the heterogeneous free world cohere.

Liberal internationalists feel the need to confront Beijing because the existence of an influential and imposing autocracy is incompatible with their vision.

The Mask of Imperialism
By Anatol Lieven

The moral arrogance to which liberal internationalism is prone was summed up in the famous words of the Democratic secretary of state Madeleine Albright (frequently quoted by President Biden): “If we have to use force, it is because we are America; we are the indispensable nation. We stand tall and we see further than other countries into the future.” This ideological framework led all too many American liberals to support the disastrous invasion of Iraq and the overthrow of the Libyan state—and to advocate for a dramatic U.S. intervention in Syria, which would have been catastrophic.

It shouldn’t take much intellectual clarity to see that this has nothing in common with anything that could honestly be called internationalism; it is in fact the very antithesis of internationalism. This is an expansionist version of American civic nationalism (euphemized in the United States as “exceptionalism”), closely related to the French Revolutionary nationalism that led French armies to export republican values by force, first to the rest of Europe and then to the colonies in Africa and Asia.

If there is only one “right side of history,” and only one path for human progress, then there is no point in studying other countries in any depth.

This feeds the Manichaean “you are with us or you are against us” strain in American culture; if America represents the only righteous path of human progress, its adversaries must be intrinsically evil. This can lead to the grotesque irony of self-described internationalists engaging in feral, chauvinist hatred of other peoples. Liberal internationalism of this kind also reinforces the dangerous ignorance that Daniel Ellsberg invoked when he remarked that at no point in American history, including when the Johnson Administration began to bomb Vietnam, could a senior official pass a freshman exam in a course on Vietnamese history or culture.

Almost 2,500 years ago, Thucydides described how Athenian democracy rivaled or even exceeded authoritarian Sparta in bellicosity, and how this eventually led Athens itself to defeat in the Peloponnesian War. The first lesson that liberal internationalists can learn from Thucydides is that, for anyone seriously committed to the well-being of humanity, international peace is not a goal to be achieved after all the world has adopted “democracy.” It is the essential basis for all cooperation among nations.

To pursue genuine internationalism, liberals also need to develop a degree of modesty about democracy itself. There is, after all, something inherently absurd about casting Trump as a would-be fascist dictator on the one hand, and on the other calling on nations to adopt the political system that elected him.

The Deep Familiarity of Donald Trump
By Daniel Treisman

Since the end of the Cold War, Western democracies have struggled to reconcile their ambitions with their capabilities. Leaders across the spectrum embraced globalization without cushioning its domestic shocks, expanded international commitments while slashing the funding for them, and celebrated democratic ideals even as they eroded public trust through contradiction and complacency.

Over time, these failures of leadership have reshaped life across the West, shifting risk onto ordinary citizens and triggering cascading crises. The costs have fallen the hardest on the least privileged—soldiers sent to unwinnable wars, workers displaced by trade and automation, and communities ravaged by economic and public health shocks.

The collapse of the Soviet Union created an extraordinary opportunity to reimagine the European order, consolidate peace, and minimize future risks. Instead, a generation of Western leaders embraced a strategy that did not add up, mortgaging their countries’ security to a vision that they then declined to adequately fund.

Leaders debated whether to integrate Russia into a common security framework—as West Germany had been after World War II—or to expand NATO and reinforce its military posture. Either option would have been bold and internally consistent.

Instead, the West chose contradiction: It expanded NATO while weakening its foundations. Between 1999 and 2020, NATO admitted 14 new members, including several on Russia’s border. Yet while its security guarantees grew, its defense capacity shrank. U.S. troop levels in Europe fell from more than 320,000 service members in the late 1980s to 70,000 by 2013—the year that the last American tank left the continent. Britain, France, and Germany made comparable cuts.

This wasn’t just a military miscalculation—it reflected a deeper political failure. In much of the West, NATO enlargement was an elite-driven project, pursued with little public debate. Leaders spoke of democracy promotion and stability but failed to explain to their citizens why they should bear the cost of defending faraway countries. Expansion remained politically viable only so long as it appeared cheap and low risk. So they presented it as such. When Moscow turned hostile, Western leaders hesitated again—unwilling to ask electorates to support serious deterrence after years of supposedly cost-free security.

This was not a question of resources. Over the same two decades, the U.S. spent more than $4 trillion on the wars in Iraq and Afghanistan—conflicts that became case studies in the syndrome of irresponsibility shaping recent governance. What began as focused missions intended to destroy terrorist camps or eliminate weapons of mass destruction devolved into open-ended occupations, marked by poor planning, mission creep, and political evasion. Besides costing hundreds of thousands of lives, they disrupted regional stability, empowered Iran, and undermined U.S. credibility abroad. At home in the United States, they corroded democratic norms through mass surveillance, indefinite detention, and an expanding national security state.

Apart from a few theatrical ideas—such as annexing Greenland or Canada—Trump’s foreign policy has reflected not innovation, but escalation. He inherited the post-Cold War playbook—unilateralism, overreach, evasion—and ran it with less discipline and greater disregard for consequences. His foreign policy is not new: It is a louder, riskier remix of what came before.

The Decline of American Sway
By Richard J. Barnet

“The Imperious Economy” offers a quite different analysis of what has eroded American power. Prof. David Calleo, a political scientist at the Johns Hopkins School of Advanced International Studies, in the end offers only a sketchy prescription for what he calls “the American disease,” but 90 percent of his new book is a plausible and subtle diagnosis. Mr. Calleo, who has written extensively on the Atlantic Alliance from a politically independent perspective, here explains in effect why the American Century proclaimed by Henry Luce in 1941 lasted about 26 years. Mr. Calleo makes the sensible but often forgotten point that “the slow transition from American hegemony to a more plural world is not, in itself, a defeat for American policy. On the contrary, it is precisely the outcome that might have been expected to follow from the policy itself.”

Between 1960 and 1970 alone, the U.S. share of world manufactured exports fell from 22.8 percent to 18.4 percent. This predictable trend was to a considerable extent the consequence of European and Japanese recovery, both explicit goals of American policy. The United States could not be prosperous if it was the only healthy market economy. Where would it sell its goods? An economically stagnant Europe and Japan would be vulnerable to Communism and Soviet influence. In its moment of supremacy, the United States was forced to diffuse its extraordinary power in the hope of securing long-term stability.

Having exported billions of dollars by means of foreign aid, private investment by the multinational corporations and enormous expenditures to maintain a military presence around the world, the United States, by President Nixon’s time, was unable to live within the system it had created. It had the power to change the rules, but it did not have either the vision or the power to design new rules that would produce the stable international order needed for long-term American prosperity.

One reason the long-term effects of excessive military spending have been minimized is that the United States has wrung economic concessions from the other industrial nations by virtue of being accepted as their military protector. But the concessions were never enough.

Mr. Calleo points out that Europe chose to listen to those American officials who preached economic integration but not to those who vainly called for “a Europe wide open to American farm products.” While the United States kept building a “nuclear umbrella” that promised less and less real security, Europe and Japan became formidable economic rivals.

The United States, now resenting its “imperial burdens and handicaps,” gave up the pretension that it was acting in behalf of a “Free World” community. It would look after its own interests.

“The Imperious Economy” is really an essay in the paradoxes of American power. We have been remarkably successful in getting our way in the world but not in achieving what we want. Disasters have played into our hands. When the Soviets moved militarily on Czechoslovakia or a war broke out in the Middle East, the United States’ role as protector was effectively dramatized, but the task of protection in the nuclear age remained as problematical as ever, and maintaining the illusions of deterrence grew ever more expensive.

The oil and food crises of the mid-1970s stemmed the relative decline of the United States vis-à-vis its allies within the industrial world because in the short run this nation was better able than they to sustain the shocks. But the United States has been unable to translate the advantages of its “imperious economy” into stable growth and prosperity at home. It can still work its will with little fear of retaliation from its allies. The Reagan Administration’s high interest rates subvert European and Japanese economic health more than anything the Kremlin seems willing or able to do, but the allies, divided as they are, can do little more than protest. Similarly, the Administration’s heavy-handed pressure on the European allies to stop natural-gas pipeline deals with the Soviet Union has dramatized anew the high cost of American protection, even as the credibility of that protection is increasingly questioned. The United States is still No. 1 in an increasingly pluralistic international system, in which America’s new position represents, as Mr. Calleo puts it, “not so much the decline of America as the revival of the world.”

But the fruits of hegemony are eluding the American people.

Requiem for an Empire
By Alfred McCoy

After recovering from the 2008 financial crisis, the U.S. was on track for a decade of dynamic growth — the auto industry saved, oil and gas production booming, the tech sector thriving, the stock market soaring, and employment solid. Internationally, Washington was the world’s preeminent leader, with an unchallenged military, formidable diplomatic clout, unchecked economic globalization, and its democratic governance still the global norm.

Looking forward, leading historians of empire agreed that America would remain the world’s sole superpower for the foreseeable future. Writing in the Financial Times in 2002, for instance, Yale professor Paul Kennedy, author of a widely read book on imperial decline, argued that “America’s array of force is staggering,” with a mix of economic, diplomatic, and technological dominance that made it the globe’s “single superpower” without peer in the entire history of the world. Russia’s defense budget had “collapsed” and its economy was “less than that of the Netherlands.” Should China’s high growth rates continue for another 30 years, it “might be a serious challenger to U.S. predominance” — but that wouldn’t be true until 2032, if then. While America’s “unipolar moment” would surely not “continue for centuries,” its end, he predicted, “seems a long way off for now.”

Writing in a similar vein in the New York Times in February 2010, Piers Brendon, a historian of Britain’s imperial decline, dismissed the “doom mongers” who “conjure with Roman and British analogies in order to trace the decay of American hegemony.” While Rome was riven by “internecine strife” and Britain ran its empire on a shoestring budget, the U.S. was “constitutionally stable” with “an enormous industrial base.” Taking a few “relatively simple steps,” he concluded, Washington should be able to overcome current budgetary problems and perpetuate its global power indefinitely.

By 2010, economic globalization was cutting good-paying factory jobs here, income inequality was widening, and corporate bailouts were booming — all essential ingredients for rising working-class resentment and deepening domestic divisions. Foolhardy military misadventures in Iraq and Afghanistan, pushed by Washington elites trying to deny any sense of decline, stoked simmering anger among ordinary Americans, slowly discrediting the very idea of international commitments. And the erosion of America’s relative economic strength from half the world’s output in 1950 to a quarter in 2010 meant the wherewithal for its unipolar power was fading fast.

Only a “near-peer” competitor was needed to turn that attenuating U.S. global hegemony into accelerating imperial decline. With rapid economic growth, a vast population, and the world’s longest imperial tradition, China seemed primed to become just such a country. But back then, Washington’s foreign policy elites thought not and even admitted China to the World Trade Organization (WTO), fully confident, according to two Beltway insiders, that “U.S. power and hegemony could readily mold China to the United States’ liking.”

At the close of the Cold War in 1991, Washington became the planet’s sole superpower, using its hegemony to forcefully promote a wide-open global economy — forming the World Trade Organization in 1995, pressing open-market “reforms” on developing economies, and knocking down tariff barriers worldwide. It also built a global communications grid by laying 700,000 miles of fiber-optic submarine cables and then launching 1,300 satellites (now 4,700).

By exploiting that very globalized economy, however, China saw its industrial output soar to $3.2 trillion by 2016, surpassing both the U.S. and Japan, while simultaneously eliminating 2.4 million American jobs between 1999 and 2011 and ensuring the closure of factories in countless towns across the South and Midwest. By fraying social safety nets while eroding protection for labor unions and local businesses in both the U.S. and Europe, globalization reduced the quality of life for many, while creating inequality on a staggering scale and stoking a working-class reaction that would crest in a global wave of angry populism.

In the globalized world America made, there is now an intimate interaction between domestic and international policy. That will soon be apparent in a second Trump administration whose policies are likely to simultaneously damage the country’s economy and further degrade Washington’s world leadership.

Trump Does Not Know How to Run an Empire
By Robert D. Kaplan

Empires at their best encourage cosmopolitanism, that is, a knowledge of other languages and cultures required for the maintenance of good diplomatic and security relations.

USAID, through its projects often run by non-governmental organizations, has been for decades doing much more than running humanitarian programs throughout the developing world. In fact, these programs don’t operate in the abstract: Because they are on-the-ground operations often in far-flung areas of a given country, they build vital human connections that are money in the bank for diplomats and military people to utilize, especially during crisis situations where local contacts are essential. An empire is about more than guns and money; it is also about the maintenance of relationships built up on official and non-official levels throughout the world by way of, among other things, humanitarian projects. Trump has been rightly concerned about the rise of Chinese power around the world, but has seemingly not realized that China is itself spreading its influence in large part through development projects. Dismantling our humanitarian projects in places like Africa and South America leaves a vast opening for the Chinese to fill with projects of their own.

The most long-lasting world powers and empires succeeded not by raw power but by various methods of persuasion: the more subtle the approach, the more longevity for the great power involved.

Exporting the American Model: Markets and Democracy
By Chalmers Johnson

There is something absurd and inherently false about one country trying to impose its system of government or its economic institutions on another. Such an enterprise amounts to a dictionary definition of imperialism. When what’s at issue is “democracy,” you have the fallacy of using the end to justify the means (making war on those to be democratized), and in the process the leaders of the missionary country are invariably infected with the sins of hubris, racism, and arrogance.

We Americans have long been guilty of these crimes. On the eve of our entry into World War I, William Jennings Bryan, President Woodrow Wilson’s first secretary of state, described the United States as “the supreme moral factor in the world’s progress and the accepted arbiter of the world’s disputes.”

The political philosopher Hannah Arendt once argued that democracy is such an abused concept we should dismiss as a charlatan anyone who uses it in serious discourse without first clarifying what he or she means by it. Therefore, let me indicate what I mean by democracy. First, the acceptance within a society of the principle that public opinion matters.

Second, there must be some internal balance of power or separation of powers, so that it is impossible for an individual leader to become a dictator. If power is concentrated in a single position and its occupant claims to be beyond legal restraints, as is true today with our president, then democracy becomes attenuated or only pro forma. In particular, I look for the existence and practice of administrative law — in other words, an independent, constitutional court with powers to declare null and void laws that contravene democratic safeguards.

Third, there must be some agreed-upon procedure for getting rid of unsatisfactory leaders. Periodic elections, parliamentary votes of no confidence, term limits, and impeachment are various well-known ways to do this, but the emphasis should be on shared institutions.

The United States holds the unenviable record of having helped install and then supported such dictators as the Shah of Iran, General Suharto in Indonesia, Fulgencio Batista in Cuba, Anastasio Somoza in Nicaragua, Augusto Pinochet in Chile, and Mobutu Sese Seko in Congo-Zaire, not to mention a series of American-backed militarists in Vietnam and Cambodia until we were finally expelled from Indochina. In addition, we ran some of the most extensive international terrorist operations in history against Cuba and Nicaragua because their struggles for national independence produced outcomes that we did not like.

On the other hand, democracy did develop in some important cases as a result of opposition to our interference — for example, after the collapse of the CIA-installed Greek colonels in 1974; in both Portugal in 1974 and Spain in 1975 after the end of the U.S.-supported fascist dictatorships; after the overthrow of Ferdinand Marcos in the Philippines in 1986; following the ouster of General Chun Doo Hwan in South Korea in 1987; and following the ending of thirty-eight years of martial law on the island of Taiwan in the same year.

Washington on the Rocks
By Alfred McCoy and Brett Reilly

Why would the CIA risk controversy in 1965, at the height of the Cold War, by overthrowing an accepted leader like Sukarno in Indonesia or encouraging the assassination of the Catholic autocrat Ngo Dinh Diem in Saigon in 1963? The answer — and thanks to WikiLeaks and the “Arab spring,” this is now so much clearer — is that both were Washington’s chosen subordinates until each became insubordinate and expendable.

Why, half a century later, would Washington betray its stated democratic principles by backing Egyptian President Hosni Mubarak against millions of demonstrators and then, when he faltered, use its leverage to replace him, at least initially with his intelligence chief Omar Suleiman, a man best known for running Cairo’s torture chambers (and lending them out to Washington)? The answer again: because both were reliable subordinates who had long served Washington’s interests well in this key Arab state.

To understand the importance of local elites, look back to the Cold War’s early days when a desperate White House was searching for something, anything that could halt the seemingly unstoppable spread of what Washington saw as anti-American and pro-communist sentiment. In December 1954, the National Security Council (NSC) met in the White House to stake out a strategy that could tame the powerful nationalist forces of change then sweeping the globe.

Across Asia and Africa, a half-dozen European empires that had guaranteed global order for more than a century were giving way to 100 new nations, many — as Washington saw it — susceptible to “communist subversion.” In Latin America, there were stirrings of leftist opposition to the region’s growing urban poverty and rural landlessness.

After a review of the “threats” facing the U.S. in Latin America, influential Treasury Secretary George Humphrey informed his NSC colleagues that they should “stop talking so much about democracy” and instead “support dictatorships of the right if their policies are pro-American.” At that moment with a flash of strategic insight, Dwight Eisenhower interrupted to observe that Humphrey was, in effect, saying, “They’re OK if they’re our s.o.b.’s.”

It was a moment to remember, for the President of the United States had just articulated with crystalline clarity the system of global dominion that Washington would implement for the next 50 years — setting aside democratic principles for a tough realpolitik policy of backing any reliable leader willing to support the U.S., thereby building a worldwide network of national (and often nationalist) leaders who would, in a pinch, put Washington’s needs above local ones.

When civilian presidents proved insubordinate, the Central Intelligence Agency went to work, promoting coups that would install reliable military successors: replacing Iranian Prime Minister Mohammad Mossadeq, who tried to nationalize his country’s oil, with General Fazlollah Zahedi (and then the young Shah) in 1953; President Sukarno with General Suharto in Indonesia during the next decade; and of course President Salvador Allende with General Augusto Pinochet in Chile in 1973, to name just three such moments.

A New History of World War II
By Daniel Immerwahr

What was the Second World War about? According to Allied leaders, that wasn’t a hard question. “This is a fight between a free world and a slave world,” U.S. Vice President Henry Wallace explained. It is “between Nazidom and democracy,” Winston Churchill said, with “tyranny” on one side and “liberal, peaceful” powers on the other.

Would that it were so simple. The Allies’ inclusion of the Soviet Union—“a dictatorship as absolute as any dictatorship in the world,” Franklin D. Roosevelt once called it—muddied the waters. But the other chief Allies weren’t exactly liberal democracies, either. Britain, France, the Netherlands, Belgium, the United States, and (depending on how you view Tibet and Mongolia) China were all empires. Together, they held, by my count, more than 600 million people—more than a quarter of the world—in colonial bondage.

This fact wasn’t incidental; empire was central to the causes and course of the war. Yet the colonial dimensions of World War II aren’t usually stressed. The most popular books and films present it as Churchill did, as a dramatic confrontation between liberty-loving nations and merciless tyrants. In the United States, it’s remembered still as the “good war,” the vanquishing of evil by the Greatest Generation.

That understanding works—sort of—when war stories focus on Adolf Hitler’s invasions of sovereign states in Europe. It falters, however, when they center on the Pacific. There, the Japanese targeted colonies, seizing them under the banner of “Asia for the Asiatics.” The Allies beat Japan back, but only to return Burma to the British and Indonesia to the Dutch—Asia for the Europeans.

What impelled Germany, Japan, and Italy on their conquering missions?

The 19th century had seen a “veritable steeplechase for colonial acquisitions,” as Italy’s foreign ministry described it. Britain won that race, with other countries that would eventually join the Allies taking secondary prizes. The Axis powers, late out of the gate, got the leftovers. Worse, the winners locked the losers out, rebuffing Japan’s attempts to join the great powers’ club and stripping Germany of its meager overseas holdings after World War I. Going into the 1930s, the Allies held 15 times more colonial acreage than the Axis states did.

Japan, Germany, and Italy were rising economies without large empires. Was that a problem? Today, it wouldn’t be; 21st-century countries don’t require colonies to prosper. But different rules applied in the first half of the 20th century. Then, industrial powers depended on raw materials from far-off lands. And without colonies, they had every reason to worry about ready availability. Hitler never forgot the World War I blockade that largely cut Germany off from such materials as rubber and nitrates and caused widespread hunger. The global Depression, which shrank international trade by two-thirds from 1929 to 1932, threatened a new form of blockade.

As cross-border trade collapsed, rich countries subsisted off whatever was within their borders. The British and French could lean on their empires. But the Germans? They were a “people without space,” as the title of a popular novel had it. Hence Hitler’s fixation on Lebensraum and the parallel Italian search for spazio vitale—both terms translate as “living space.” The Japanese complained of “ABCD encirclement,” meaning that their access to such vital resources as oil and rubber was hemmed in by the Americans, British, Chinese, and Dutch.

Britain, France, and the United States preferred peace because they were satisfied with the status quo. “We have got most of the world already, or the best parts of it,” observed the head of Britain’s navy in 1934. “We only want to keep what we have got and prevent others from taking it away from us.” The Japanese, Germans, and Italians, by contrast, sought a violent redivision of the spoils.

“What India was for England the spaces of the East will be for us,” Hitler once remarked. Shifting analogies, he also noted that Germans should “look upon the natives as Redskins.” If Germany couldn’t easily reach distant territories in Asia or Africa, it could carve colonial space out of Eastern Europe.

The aim of these land grabs was resources, and the Axis states plundered their conquered territories. Millions of Asians starved as Japan impounded food—the Indonesians and Vietnamese both suffered famines. Germany plundered, too, targeting Jews but not limiting its depredations to them. Its scheme to feed itself with confiscated Soviet grain, the unfathomably cruel “Hunger Plan,” was carried out with the understanding that, if successful, it might kill 30 million.

Food Weaponization Makes a Deadly Comeback
By Zach Helder, Mike Espy, Dan Glickman, Mike Johanns and Devry Boughner Vorwerk

In 1974, U.S. Secretary of Agriculture Earl Butz made a bold and now infamous pronouncement to Time magazine: “Food is a weapon. It is now one of the principal tools in our negotiating kit.” In the context of the Cold War, Butz viewed American agricultural abundance as an instrument of coercion that Washington could wield in the Third World: food aid and trade in exchange for political concessions. The same Time article observed, “This may be a brutal policy, . . . but Washington may feel no obligation to help countries that consistently and strongly opposed it.” Butz drew on an intuition as old as the agricultural revolution: that food bestows control on those who possess it and renders vulnerable those who do not. To exploit that vulnerability—by, say, laying siege to and starving an enemy’s civilian population—is to weaponize food.

Food is indeed a weapon, and Butz was hardly the first public official to state it so plainly. Throughout the Russian Civil War, from 1917 to 1922, Bolshevik leaders were obsessed with the acquisition and distribution of grain. Famines across eastern Europe—at first an unintended consequence of civil war and societal collapse—offered the Bolsheviks such leverage over domestic opposition that they even contemplated rejecting food aid from the American Relief Administration, explicitly telling the Americans that “food is a weapon.” During World War II, the relative strength of U.S. food production was critical to the Allied war effort, so much so that the U.S. Office of War Information promoted food rationing with a striking slogan: “Food is a weapon. Don’t waste it!”

Before World War II, the effects of food weaponization were local in scope, as food security was largely a function of domestic or regional food supplies. But as regional food systems became woven into an interdependent global system, Butz envisioned something even grander: American dominance of the global food trade as a tool of economic and political warfare. He failed to foresee that global interdependence made the targeting of individual states infeasible.

The United States first tested Butz’s proposition in 1980, imposing a grain embargo on the Soviet Union. The plan failed: Moscow swiftly found alternative suppliers, and the Carter administration incurred fierce domestic political blowback. But the American experiment with what some then called “the food weapon” offered a grim lesson: food trade restrictions could have dangerous and unpredictable consequences. It became evident that a liberal democracy leading an international order had no use for such an imprecise weapon, which was as likely to harm one’s allies and domestic constituencies as it was to harm one’s intended target.

Economic sanctions can also function as a form of food weaponization, intentionally or not. Western countries imposing sanctions against Russia took pains to safeguard the food supply, yet food markets were nonetheless affected because of a phenomenon called “overcompliance,” or the tendency of private firms to play it excessively safe under uncertain sanctions rules.

Israel, Gaza, and the Starvation Weapon
By Boyd van Dijk

In early March, as its cease-fire with Hamas began to unravel, Israel again turned to a tactic it had used earlier in the war in Gaza: imposing a total blockade on the territory, including a cutoff of all deliveries of food, medicine, fuel, and electricity. The aim, according to Israeli cabinet officials, was to make life unbearable for Gaza’s two million citizens to force Hamas to accept Israeli demands in talks on a cease-fire extension. On social media, Finance Minister Bezalel Smotrich, echoing statements by National Security Minister Itamar Ben Gvir, defended the government’s decision to “completely halt” the flow of humanitarian aid, calling it a way to open the “gates of hell . . . as quickly and deadly as possible.” This was not an isolated remark; Smotrich had previously suggested that blocking aid to Gaza was justified even at the cost of mass civilian starvation. Seven weeks into the new siege, as the UN World Food Program announced that border closings had caused all of its food stocks in Gaza to run out, Moshe Saada, a Knesset member from Prime Minister Benjamin Netanyahu’s Likud Party, told Israel’s Channel 14 TV that that was the intention: “Yes, I will starve the residents of Gaza, yes, this is our obligation,” Saada said.

During World War I, blockade planners in both Germany and Britain regarded civilian populations as the backbone of modern militaries: in a total war, they reasoned, cutting off food imports for enemy civilians was not only permissible but also perhaps necessary. Thus, beginning in 1914, the United Kingdom imposed a naval blockade on all the Central Powers that eventually resulted in hundreds of thousands of deaths. Indeed, so terrifyingly effective was the tactic, known in German as the Hungerblockade, that both victors and vanquished viewed it as a war-winning weapon that had caused the societal collapse of Germany and Austria-Hungary in 1918.

In World War II, starvation campaigns became, if anything, even more important, and both the Allied and Axis powers explicitly acknowledged that their aim was to kill enemy civilians. As part of its all-out war against Japan, for example, the United States launched “Operation Starvation,” a submarine and air blockade cutting off food and raw materials. After the war, the Nazis were held accountable for civilian starvation actions at the Nuremberg tribunals, though such measures were subsumed under meta-crimes such as extermination. For the victorious Allied parties, there was no accountability at all.

After 1949, blockading powers were unable to formally reclaim the right to deliberately starve civilians as a lawful weapon of war. Nonetheless, they were able to create significant carve-outs, in which nonintentional civilian death could be considered legally tolerable under specific circumstances. The United States, for example, used starvation tactics on a large scale in the Vietnam War, with its systematic destruction of crops in areas suspected of harboring communist guerrillas. Within the emerging framework, a government at war could claim that starving enemy combatants remained legal and that as a result, incidental civilian deaths were tragic but nearly unavoidable outcomes of a legitimate method of modern warfare against totalitarian enemies.

Deliberate starvation has persisted as a military strategy not just because it is cheap, simple, and brutally effective but also because it is so difficult to punish.

The Unpunished: How Extremists Took Over Israel
By Ronen Bergman and Mark Mazzetti

The long arc of harassment, assault and murder of Palestinians by Jewish settlers is twinned with a shadow history, one of silence, avoidance and abetment by Israeli officials. For many of those officials, it is Palestinian terrorism that most threatens Israel. But in interviews with more than 100 people — current and former officers of the Israeli military, the National Israeli Police and the Shin Bet domestic security service; high-ranking Israeli political officials, including four former prime ministers; Palestinian leaders and activists; Israeli human rights lawyers; American officials charged with supporting the Israeli-Palestinian partnership — we found a different and perhaps even more destabilizing threat. A long history of crime without punishment, many of those officials now say, threatens not only Palestinians living in the occupied territories but also the State of Israel itself.

The violence and impunity that these cases demonstrate existed long before Oct. 7. In nearly every month before October, the rate of violent incidents was higher than during the same month in the previous year. And Yesh Din, an Israeli human rights group, looking at more than 1,600 cases of settler violence in the West Bank between 2005 and 2023, found that just 3 percent ended in a conviction. Ami Ayalon, the head of Shin Bet from 1996 to 2000 — speaking out now because of his concern about Israel’s systemic failure to enforce the law — says this singular lack of consequences reflects the indifference of the Israeli leadership going back years. “The cabinet, the prime minister,” he says, “they signal to the Shin Bet that if a Jew is killed, that’s terrible. If an Arab is killed, that’s not good, but it’s not the end of the world.”

Ayalon’s assessment was echoed by many other officials we interviewed. Mark Schwartz, a retired American three-star general, was the top military official working at the United States Embassy in Jerusalem from 2019 to 2021, overseeing international support efforts for the partnership between Israel and the Palestinian Authority. “There’s no accountability,” he says now of the long history of settler crimes and heavy-handed Israeli operations in the West Bank. “These things eat away at trust and ultimately the stability and security of Israel and the Palestinian territories. It’s undeniable.”

In 2022, just 18 months after losing the prime ministership, Benjamin Netanyahu regained power by forming an alliance with ultraright leaders of both the Religious Zionism Party and the Jewish Power Party. It was an act of political desperation on Netanyahu’s part, and it ushered into power some truly radical figures, people — like Smotrich and Itamar Ben-Gvir — who had spent decades pledging to wrest the West Bank and Gaza from Arab hands. Just two months earlier, according to news reports at the time, Netanyahu refused to share a stage with Ben-Gvir, who had been convicted multiple times for supporting terrorist organizations and, in front of television cameras in 1995, vaguely threatened the life of Yitzhak Rabin, who was murdered weeks later by an Israeli student named Yigal Amir.

Now Ben-Gvir was Israel’s national security minister and Smotrich was Israel’s finance minister, charged additionally with overseeing much of the Israeli government’s activities in the West Bank. In December 2022, a day before the new government was sworn in, Netanyahu issued a list of goals and priorities for his new cabinet, including a clear statement that the nationalistic ideology of his new allies was now the government’s guiding star. “The Jewish people,” it said, “have an exclusive and inalienable right to all parts of the land of Israel.”

The expansion of the settlements had long been an irritant in Israel’s relationship with the United States, with American officials spending years dutifully warning Netanyahu both in public and in private meetings about his support for the enterprise. But the election of Donald Trump in 2016 ended all that. His new administration’s Israel policy was led mostly by his son-in-law, Jared Kushner, who had a long personal relationship with Netanyahu, a friend of his father’s who had stayed at their family home in New Jersey.

Ehud Olmert, the former Israeli prime minister, said he believes that many members of the ultraright in Israel “want war.” They “want intifada,” he says, “because it is the ultimate proof that there is no way of making peace with the Palestinians and there is only one way forward — to destroy them.”

Jared Kushner Says Israel Should “Finish the Job” in Gaza So It Can Focus on Building Valuable “Waterfront Property”
By Bess Levin

In 2017, on the eve of his inauguration, Donald Trump made a bold and unintentionally hilarious prediction: His son-in-law, Jared Kushner, was going to bring peace to the Middle East. “If [Jared] can’t produce peace in the Middle East, nobody can,” Trump said, adding, “All my life, I’ve been hearing that’s the toughest deal to make, but I have a feeling Jared is going to do a great job.” Unfortunately, Jared did not do a great job, and failed to even come close to bringing peace to the region, despite reading “25 books” on the topic.

The Story Behind Jared Kushner’s Curious Acceptance Into Harvard
By Daniel Golden

I would like to express my gratitude to Jared Kushner for reviving interest in my 2006 book, “The Price of Admission.” I have never met or spoken with him, and it’s rare in this life to find such a selfless benefactor. Of course, I doubt he became Donald Trump’s son-in-law and consigliere merely to boost my lagging sales, but still, I’m thankful.

My book exposed a grubby secret of American higher education: that the rich buy their under-achieving children’s way into elite universities with massive, tax-deductible donations. It reported that New Jersey real estate developer Charles Kushner had pledged $2.5 million to Harvard University in 1998, not long before his son Jared was admitted to the prestigious Ivy League school. At the time, Harvard accepted about one of every nine applicants. (Nowadays, it only takes one out of twenty.)

I also quoted administrators at Jared’s high school, who described him as a less than stellar student and expressed dismay at Harvard’s decision.

“There was no way anybody in the administrative office of the school thought he would on the merits get into Harvard,” a former official at The Frisch School in Paramus, New Jersey, told me. “His GPA did not warrant it, his SAT scores did not warrant it. We thought for sure, there was no way this was going to happen. Then, lo and behold, Jared was accepted. It was a little bit disappointing because there were at the time other kids we thought should really get in on the merits, and they did not.”

How Gaza Shattered the West’s Mythology
By Pankaj Mishra

On April 19, 1943, a few hundred young Jews in the Warsaw Ghetto took up whatever arms they could find and struck back at their Nazi persecutors. Most Jews in the ghetto had already been deported to extermination camps. The fighters were, as one of their leaders, Marek Edelman, recalled, seeking to salvage some dignity: “All it was about, finally, was our not letting them slaughter us when our turn came. It was only a choice as to the manner of dying.”

After a few desperate weeks, the resisters were overwhelmed. Most of them were killed. Some of those still alive on the last day of the uprising committed suicide in the command bunker as the Nazis pumped gas into it; only a few managed to escape through sewer pipes. German soldiers then burned the ghetto, block by block, using flamethrowers to smoke out the survivors.

The Polish poet Czeslaw Milosz later recalled hearing screams from the ghetto “on a beautiful quiet night, a country night in the outskirts of Warsaw”:

This screaming gave us goose pimples. They were the screams of thousands of people being murdered. It traveled through the silent spaces of the city from among a red glow of fires, under indifferent stars, into the benevolent silence of gardens in which plants laboriously emitted oxygen, the air was fragrant, and a man felt that it was good to be alive. There was something particularly cruel in this peace of the night, whose beauty and human crime struck the heart simultaneously. We did not look each other in the eye.

In a poem Milosz wrote in occupied Warsaw, “Campo dei Fiori,” he evokes the merry-go-round next to the ghetto’s wall, on which riders move skyward through the smoke of corpses, and whose jaunty tune drowns out the cries of agony and despair. Living in Berkeley, California, while the U.S. military bombed and killed hundreds of thousands of Vietnamese, an atrocity he compared to the crimes of Adolf Hitler and Joseph Stalin, Milosz again knew shameful complicity in extreme barbarity. “If we are capable of compassion and at the same time are powerless,” he wrote, “then we live in a state of desperate exasperation.”

Israel’s annihilation of Gaza, provisioned by Western democracies, inflicted this psychic ordeal for months on millions of people—involuntary witnesses to an act of political evil, who allowed themselves to occasionally think that it was good to be alive, and then heard the screams of a mother watching her daughter burn to death in yet another school bombed by Israel.

For decades now, the Shoah has set the standard of human evil. The extent to which people identify it as such and promise to do everything in their power to combat antisemitism serves, in the West, as the measure of their civilization. But many consciences were perverted or deadened over the years European Jewry was obliterated. Much of gentile Europe joined, often zealously, in the Nazi assault on Jews, and the news even of their mass murder was met with skepticism and indifference in the West, especially the United States. Reports of atrocities against Jews, George Orwell recorded as late as February 1944, bounced off consciousnesses “like peas off a steel helmet.” Western leaders declined to admit large numbers of Jewish refugees for years after the revelation of Nazi crimes. Afterwards, Jewish suffering was ignored and suppressed. Meanwhile, West Germany, though far from being de-Nazified, received cheap absolution from Western powers while being enlisted into the Cold War against Soviet communism.

It would have been easy for Western leaders to withhold unconditional support to an extremist regime in Israel while also acknowledging the necessity of pursuing and bringing to justice those guilty of war crimes on Oct. 7. Why then did Biden repeatedly claim to have seen atrocity videos that do not exist? Why did British Prime Minister Keir Starmer, a former human rights lawyer, assert that Israel “has the right” to withhold power and water from Palestinians, and punish those in the Labour Party calling for a cease-fire? Why did Jürgen Habermas, the eloquent champion of the Western Enlightenment, leap to the defense of avowed ethnic cleansers?

What made the Atlantic, one of the oldest periodicals in the United States, publish an article arguing, after the murder of nearly 8,000 children in Gaza, that “it is possible to kill children legally”? What explains the recourse to the passive voice in the mainstream Western media while reporting Israeli atrocities, which made it harder to see who is doing what to whom, and under what circumstances (“The lonely death of Gaza man with Down’s syndrome” read the headline of a BBC report on Israeli soldiers unleashing an attack dog on a disabled Palestinian)? Why did U.S. billionaires help spur pitiless crackdowns on protesters on college campuses? Why were academics and journalists sacked, artists and thinkers de-platformed, and young people barred from jobs for appearing to defy a pro-Israel consensus? Why did the West, while defending and sheltering Ukrainians from a venomous assault, so pointedly exclude Palestinians from the community of human obligation and responsibility?

On the Hypocrisies and Violent Legacies of British Imperialism
By Caroline Elkins

After the Great War, League of Nations members divided up the former German and Ottoman empires. The Treaty of Versailles called the war’s territorial spoils “mandates” that were “inhabited by peoples not yet able to stand by themselves under the strenuous conditions of the modern world,” and it consecrated “the principle that the well-being and development of such peoples form a sacred trust of civilisation.”

The Second World War signaled the mandate system’s demise, though the Versailles Treaty’s emphasis on “peoples not yet able to stand by themselves” survived. To secure an Anglo-American alliance, Britain maneuvered through American anti-imperialist demands by replacing outdated ideas with new reform efforts. Trusteeship became “partnership,” and Britain granted more local political participation in the empire.

But universal rights and self-rule were still not in the offing for an empire that had carried Britain through the war and guaranteed the tiny island nation’s claims to Big Three status when the fighting was over. Churchill met Franklin Roosevelt and Joseph Stalin in the winter of 1945 on Crimea’s frigid shores, where peacetime talks veered away from Britain’s empire. A few months later delegates from fifty Allied nations gathered in San Francisco to hash out the UN charter.

Despite vociferous protests from non-Western nations and interest groups, the international organization’s founding document affirmed imperialism’s place in the new world order.

In theory, closer partnership with the empire would rescue Britain’s embattled economy and ensure its superpower status, even if it meant boots on the ground. Abiding tensions between universal rights and racial differences exploded. The empire began to unravel, but the practice and language of liberal reform were always part of imperial conflicts.

States of emergency were not wars but campaigns for British subjects’ “hearts and minds.” So-called terrorists and their supporters could be reformed. Behind the barbed wire of detention, civics and home craft lessons would liberate them, as would forced labor’s sweat and toil and torture’s unfathomable pain. Britain had a new name for this, too. No longer called the “moral effect,” it was now “rehabilitation.” The empire’s lexicon also had other expressions for it. “Mwiteithia Niateithagio”—“Abandon hope all ye who enter here”—hung over the entrance to one of Kenya’s most notorious detention camps.

Turning to history’s balance sheet to determine which European empire was more or less brutal than others can be an invidious exercise. Historians marshal objective data to make their case: body counts, number of soldiers on the ground, official reports. But all evidence is subjective, particularly that which is mediated through state bureaucracies.

Truth telling can’t be found in numbers either. Governments routinely massaged them, under- or overreporting their counts to suit political needs.

What we do know is that all empires were violent. Coercion was central to initial acts of conquest and to the maintenance of rule over nonconsenting populations. How this violence was systematized, enacted, and understood varied. For instance, there was nothing reformist about Belgian king Leopold II’s bloody rule in the Congo. He took control of a massive swath of Africa’s interior in 1885 and for two decades enabled concession companies to force Africans to labor, using horrific methods, in order to extract rubber.

In South West Africa, Germany’s military descended into “dysfunctional extremes of violence,” nearly wiping out the Herero and Nama peoples, as the historian Isabel V. Hull tells us. Chancellor Otto von Bismarck’s constitution isolated the army from external oversight and critique, and its militarism snowballed in Germany’s empire and informed fascism’s advance. Arendt called this the “boomerang effect,” and it was not isolated to Hitler’s rise. She looked at Europe’s race thinking and “wild murdering” and “terrible massacres” in the colonies and saw in them the origins of European totalitarianism. “When the European mob discovered what a ‘lovely virtue’ a white skin could be in Africa, when the English conqueror in India became an administrator,” Arendt wrote, “convinced of his own innate capacity to rule and dominate, when the dragon-slayers turned into either ‘white men’ of ‘higher breeds’ or into bureaucrats and spies… the stage seemed to be set for all possible horrors. Lying under anybody’s nose were many of the elements which gathered together could create a totalitarian government on the basis of racism.”

The legacies Britain’s empire left behind have had significant bearings on a quarter of the world’s landmass where nations were born out of a cauldron of violence. It is a world where the colonial state rarely had the recognition of legitimate sovereignty from its subjects. Instead, the social contract was forced upon them, and the nationalism and independent states that colonial subjects created would go on to include many of liberalism’s contradictions.

Britain Can No Longer Hide Behind the Myth That Its Empire Was Benign
By Caroline Elkins

During the empire’s heyday, British officials were obsessed with the “rule of law,” claiming this was the basis of good government. But good government in empire was liberalism’s fever dream. Its rule of law codified difference, curtailed freedoms, expropriated land and property, and ensured a steady stream of labor for the mines and plantations, the proceeds from which helped fuel Britain’s economy.

After Britain waged some 250 wars in the nineteenth century to “pacify” colonial subjects, violent conflicts—big and small—were recurring as colonial officials imposed and maintained British sovereignty over populations who ostensibly never had it. When the colonized demanded basic rights over their own bodies and freedoms, British officials often criminalized them, cast their actions—including vandalism, labor strikes, riots, and full-blown insurgencies—as political threats, and invested police forces and the military with legally conferred powers for repression. To justify these measures, Britain deployed its developmentalist framework, pointing to the “moral effect” of violence, a necessary element for reforming unruly “natives.”

Uncovering the brutal truth about the British empire
By Marc Parry

The Mau Mau uprising had long fascinated scholars. It was an armed rebellion launched by the Kikuyu, who had lost land during colonisation. Its adherents mounted gruesome attacks on white settlers and fellow Kikuyu who collaborated with the British administration. Colonial authorities portrayed Mau Mau as a descent into savagery, turning its fighters into “the face of international terrorism in the 1950s”, as one scholar puts it.

The British, declaring a state of emergency in October 1952, proceeded to attack the movement along two tracks. They waged a forest war against 20,000 Mau Mau fighters, and, with African allies, also targeted a bigger civilian enemy: roughly 1.5 million Kikuyu thought to have proclaimed their allegiance to the Mau Mau campaign for land and freedom. That fight took place in a system of detention camps.

The British destroyed documents in Kenya – scholars knew that. But for years clues had existed that Britain had also expatriated colonial records that were considered too sensitive to be left in the hands of successor governments. Kenyan officials had sniffed this trail soon after the country gained its independence. In 1967, they wrote to Britain’s Foreign Office asking for the return of the “stolen papers”. The response? Blatant dishonesty, writes David M Anderson, a University of Warwick historian and author of Histories of the Hanged, a highly regarded book about the Mau Mau war.

Internally, British officials acknowledged that more than 1,500 files, encompassing over 100 linear feet of storage, had been flown from Kenya to London in 1963, according to documents reviewed by Anderson. Yet they conveyed none of this in their official reply to the Kenyans. “They were simply told that no such collection of Kenyan documents existed, and that the British had removed nothing that they were not entitled to take with them in December 1963,” Anderson writes. The stonewalling continued as Kenyan officials made more inquiries in 1974 and 1981, when Kenya’s chief archivist dispatched officials to London to search for what he called the “migrated archives”. This delegation was “systematically and deliberately misled in its meetings with British diplomats and archivists,” Anderson writes in a History Workshop Journal article, Guilty Secrets: Deceit, Denial and the Discovery of Kenya’s ‘Migrated Archive’.

The turning point came in 2010, when Anderson, now serving as an expert witness in the Mau Mau case, submitted a statement to the court that referred directly to the 1,500 files spirited out of Kenya. Under legal pressure, the government finally acknowledged that the records had been stashed at a high-security storage facility that the Foreign Office shared with the intelligence agencies MI5 and MI6. It also revealed a bigger secret. This same repository, Hanslope Park, held files removed from a total of 37 former colonies.

In court, lawyers representing the British government tried to have the Mau Mau case tossed out. They argued that Britain could not be held responsible because liability for any colonial abuses had devolved to the Kenyan government upon independence. But the presiding judge, Richard McCombe, dismissed the government’s bid to dodge responsibility as “dishonourable”.

The British government, defeated repeatedly in court, moved to settle the Mau Mau case. On 6 June 2013, the foreign secretary, William Hague, read a statement in parliament announcing an unprecedented agreement to compensate 5,228 Kenyans who were tortured and abused during the insurrection. Each would receive about £3,800. “The British government recognises that Kenyans were subject to torture and other forms of ill-treatment at the hands of the colonial administration,” Hague said. Britain “sincerely regrets that these abuses took place.” The settlement, in Anderson’s view, marked a “profound” rewriting of history. It was the first time Britain had admitted carrying out torture anywhere in its former empire.

Worldwide Devastation
By Howard W. French

In 2005, Britain’s then–Labour chancellor of the exchequer, Gordon Brown, chose the backdrop of Tanzania to make a dramatic statement about his nation’s unmatched record of imperial conquest and rule. “The time is long gone,” he said, “when Britain needs to apologize for its colonial history.”

Mirroring 19th-century historians’ and politicians’ polished encomiums to a beneficent British Empire, the speech brought elite assessments of Britain’s unparalleled dominion over one quarter of the globe, and over a similar fraction of the human population, almost full circle. Back in the 19th century, the task of ruling over myriad darker-skinned peoples around the world had been depicted less as a matter of self-interest than of moral obligation. It was Britain’s unique vocation to spread progressive constitutional freedoms and the rule of law, along with free trade and free labor, among the less fortunate barbarians. As the Whig politician and historian Thomas Babington Macaulay wrote of Britain’s empire: “It is to her peculiar glory, not that she has ruled so widely—not that she has conquered so splendidly—but that she has ruled only to bless, and conquered only to spare.”

The famous quote by Sir John Seeley, about Britain’s having acquired its empire in a fit of absentmindedness, doesn’t even begin to capture the full scope and spirit of the denialism that persisted among famous scholars at Oxford and Cambridge well into the 20th century. In 1914, for example, the historian H.E. Egerton, the first occupant of the Beit Chair of Colonial History at Oxford, wrote that British power in Asia and Africa had come about due to the passively worded “downfall of the Moghul empire” and “the breaking up of the native tribal system and the resulting anarchy,” respectively. Much later, the distinguished historian Christopher Bayly would write that the unstated purpose of the Cambridge History of the British Empire, first published in 1929, “was to demonstrate how the English values of ‘justice,’ ‘benevolence,’ and ‘humanity’ were transformed into a universal ethos of free nations through the operation of ‘the rule of law and democratic government’” under British rule.

Such views remained fairly unchallenged until the 1960s, when space for more critical, revisionist accounts of the British Empire began to open up. Most famous among these were the histories by Jack Gallagher and Ronald Robinson, whose co-authored essay “The Imperialism of Free Trade” helped launch the so-called Cambridge School of historiography, which argued that Britain had profited from empire through trade while avoiding extensive formal control over its colonies. According to the historian Richard Drayton, these new accounts stripped the traditional emphasis of “high moral purpose” from the British narrative, replacing it with a franker acknowledgment of self-interest and realpolitik.

With Legacy of Violence, Caroline Elkins has stepped firmly into this arena—or, rather, reentered it—offering a sweeping and detailed history of the violence and brutality of the British Empire.

For all of the bluster and proclaimed moral certainty of British politicians, Elkins argues, much of Britain’s zeal in clinging to its control over others, even as the Age of Empire seemed increasingly destined to end, was driven not by self-confidence but rather by insecurity over the rapid rise of rival Western powers. It was global empire alone that, in this view, had prevented England from becoming, say, just another Sweden. “There are not wanting those who say that in this Jubilee year our Empire has reached the heights of its glory and power, and that now we shall begin to decline, as Babylon, Carthage, Rome declined,” she quotes Churchill as saying in one 1897 exhortation. “Do not believe these croakers but give the lie to their dismal croaking by showing by our actions that the vigor and vitality of our race is unimpaired and that our determination is to uphold the Empire that we have inherited from our fathers as Englishmen.” As Elkins makes clear throughout Legacy of Violence, the racialized aspect of empire—meaning clear notions of Britain’s Anglo-Saxon superiority over its Black, brown, and so-called yellow subjects—has been present from the beginning.

Although liberalism has promised virtuous-sounding ideals like freedom, modernity, reformism, and the rule of law, it has freely used these ideals, time and again, as justifications for wreaking devastation on the subject peoples caught in its grip.

This self-serving hypocrisy will be familiar to those American readers who recall an infamous episode from the Vietnam War, when after the bloody Battle of Bến Tre, which left hundreds of civilians dead and thousands of homes destroyed, a US Army major explained to Associated Press reporter Peter Arnett, “It became necessary to destroy the town to save it.” But for Elkins, it also demonstrates how liberalism’s willingness to ride roughshod over others has a long imperial pedigree that can be traced through a series of shockingly violent wars that are seldom recalled outside of the nations or regions in which they occurred.

Why Nemesis Is at Our Door
By Chalmers Johnson

If we choose to keep our empire, as the Roman republic did, we will certainly lose our democracy and grimly await the eventual blowback that imperialism generates. There is an alternative, however. We could, like the British Empire after World War II, keep our democracy by giving up our empire. The British did not do a particularly brilliant job of liquidating their empire and there were several clear cases where British imperialists defied their nation’s commitment to democracy in order to hang on to foreign privileges. The war against the Kikuyu in Kenya in the 1950s and the Anglo-French-Israeli invasion of Egypt in 1956 are particularly savage examples of that. But the overall thrust of postwar British history is clear: the people of the British Isles chose democracy over imperialism.

In her book The Origins of Totalitarianism, the political philosopher Hannah Arendt offered the following summary of British imperialism and its fate:

“On the whole it was a failure because of the dichotomy between the nation-state’s legal principles and the methods needed to oppress other people permanently. This failure was neither necessary nor due to ignorance or incompetence. British imperialists knew very well that ‘administrative massacres’ could keep India in bondage, but they also knew that public opinion at home would not stand for such measures. Imperialism could have been a success if the nation-state had been willing to pay the price, to commit suicide and transform itself into a tyranny. It is one of the glories of Europe, and especially of Great Britain, that she preferred to liquidate the empire.”

As a form of government, imperialism does not seek or require the consent of the governed. It is a pure form of tyranny. The American attempt to combine domestic democracy with such tyrannical control over foreigners is hopelessly contradictory and hypocritical. A country can be democratic or it can be imperialistic, but it cannot be both.

The World’s Four Legacy Empires Going Down
By Alfred McCoy

Since the Cold War ended in 1990, four legacy empires — those of China, France, Russia, and the United States — have exercised an undue influence over almost every aspect of international affairs. From the soft power of fashion, food, and sports to the hard power of arms, trade, and technology, those four powers have, each in its own way, helped to set the global agenda for the past 35 years. By dominating vast foreign territories, both militarily and economically, they have also enjoyed extraordinary wealth and a standard of living that’s been the envy of the rest of the world. If they now give way in a collective version of collapse, instead of one succeeding another, we may come to know a new world order whose shape is as yet unimaginable.

As a comparatively small state essentially devoid of natural resources, France won its global power through the sort of sheer ruthlessness — cutthroat covert operations, gritty military interventions, and cunning financial manipulations — that the three larger empires are better able to mask with the aura of their awesome power.

For 60 years after its formal decolonization of northern Africa in 1960, France used every possible diplomatic device, overt and covert, fair and foul, to incorporate 14 African nations into a neo-colonial imperium covering a quarter of Africa that critics called Françafrique.

From Paris’s perspective, the aim of the game was the procurement of cut-rate commodities — minerals, oil, and uranium — critical for its industrial economy.

By 2020, however, nationalist anger at repeated transgressions of their sovereignty was rising in many of those relatively new countries, putting pressure on French forces to withdraw.

While France’s African imperium was driven by economic imperatives, the revival of Russia’s empire, starting early in this century, has been all about geopolitics. During the last years of the Cold War, from 1989 to 1991, the Soviet Union collapsed, with Moscow losing an empire of seven Eastern European satellite states and 15 “republics” that would become 22 free-market democratic nations.

If Senator John McCain was right when, in 2014, he called Russia “a gas station masquerading as a country,” then the rapid switch to alternative energy across Eurasia could, within a decade, rob Moscow of the finances for further adventures, reducing Russia, now also harried by economic sanctions, to a distinctly secondary regional power.

For the past 30 years, China’s transformation from a poor peasant society into an urban industrial powerhouse has been the single most dramatic development in modern history. Indeed, its relentless rise as the planet’s top industrial power has given it both international economic influence and formidable military power, exemplified by a trillion-dollar global development program and the world’s largest navy. Unlike the other empires of our era that have expanded via overseas bases and military intervention, China has only acted militarily on contiguous territory — invading Tibet in the 1950s, claiming the South China Sea during the past decade, and endlessly maneuvering (ever more militarily) to subdue Taiwan. Had China’s unprecedented annual growth rate continued for another five years, Beijing might well have attained the means to become the globe’s preeminent power.

But there are ample signs that its economic juggernaut may have reached its limits under a Communist command economy. Indeed, it now appears that, in clamping an ever-tighter grip on Chinese society through pervasive surveillance, the Communist Party may be crippling the creativity of its talented citizenry.

After decades of rip-roaring growth, its rate of gross domestic product growth, which peaked at 13%, has recently slumped to 4.6%. Adding to its invisible economic crisis, by 2022 the country’s 31 provinces had shouldered crippling public debts that, the New York Times reported, reached an extraordinary “$9.5 trillion, equivalent to half the country’s economy,” and some 20 major cities have since leaped into the abyss by spending wildly to give the economy a pulse. Seeking markets beyond its flagging domestic economy, China, which already accounted for 60% of global electric vehicle purchases, is launching a massive export drive for its cut-rate electric cars, an effort that is about to crash headlong into rising tariff walls globally.

Xi’s Imperial Ambitions Are Rooted in China’s History
By Michael Sobolik

From the Qin dynasty’s founding in 221 B.C. to the Qing’s collapse in 1912 A.D., China’s sovereign territory expanded by a factor of four. What began as a small nation bound in the fertile crescent of the Yangtze and Yellow rivers morphed into an imperial wrecking ball.

To be sure, China was not the aggressor in every war it fought. In antiquity, nomadic tribes regularly raided China’s proto-dynasties. During the infamous Opium Wars of the 19th century, Western imperialist powers victimized and preyed upon China at gunpoint. The Chinese Communist Party (CCP) regularly refers to China’s “Century of Humiliation,” when European empires brutalized China and killed or wounded tens of thousands of Chinese men, women, and children. Indeed, the party has memorialized these grievances in a permanent exhibit of the National Museum of China, just steps away from Tiananmen Square.

For all of Beijing’s legitimate and long-standing security concerns, however, the sheer scope of China’s expansion is undeniable. Western leaders often deny or ignore it, usually at the behest and prodding of Chinese leaders. When Nixon finally gained an audience with Mao Zedong, he reassured the chairman, “We know China doesn’t threaten the territory of the United States.” Mao quickly corrected him: “Neither do we threaten Japan or South Korea.” To which Nixon added, “Nor any country.” Within the decade, Beijing invaded Vietnam.

At the time, Nixon’s gambit was to split the Soviet bloc and drive a wedge between the Soviet Union and the People’s Republic of China (PRC). Nixon and Kissinger saw the Sino-Soviet split and took stock of the PRC’s trajectory: a growing population that, once harnessed, was poised to dominate the global economy. It was textbook realpolitik: cold, dispassionate tactics divorced from moralism. If Washington could turn the Soviet Union’s junior partner, the West could significantly hamper Moscow’s ability to project power into Eastern Europe and Southeast Asia.

During the final years of Nixon’s life, his presidential speechwriter William Safire asked him about that fateful trip to Beijing in 1972. Had opening up to the PRC made Americans safer and China freer? According to Safire, “That old realist, who had played the China card to exploit the split in the Communist world, replied with some sadness that he was not as hopeful as he had once been: ‘We may have created a Frankenstein.’”

When Trade Wars Become Shooting Wars
By Allison Carnegie

… Beijing pushed to join the WTO from the moment it was created, hoping the body could guarantee Chinese manufacturers predictable global access. This effort sparked fierce debate within the United States about whether to permit accession. Advocates of integration argued that tying China’s economy to the world’s would deter China from launching military conflicts, lest it risk a cutoff, and encourage political liberalization. Opponents feared that economic integration before political liberalization would only strengthen an authoritarian competitor. Ultimately, the optimists prevailed: Washington allowed Beijing to join the WTO in 2001. The organization then alleviated China’s hold-up problems by prohibiting the United States from threatening tariff hikes each year. The Chinese economy, already expanding at a healthy rate, began to grow even faster.

For the WTO, Chinese accession was a triumph. The organization was created to increase trade everywhere and stop countries from using commerce as a weapon, and integrating the world’s most populous country (and a former U.S. adversary) suggested the body was having its intended effect. And at the time, it was—countries typically obeyed the WTO’s common rules, listened to its adjudicators, and played along with its enforcement tribunals. The resulting system was hardly perfect; it failed, for example, to stop China from using industrial policy as a means to promote specific sectors or companies, often at the expense of foreign firms, or from restricting exports to countries that criticized Beijing. But for the most part, the WTO was quite successful. Trade flourished, and the global economy grew more quickly than it otherwise would have.

The Dark History of How China Captured Apple
By Issie Lapowsky

In May 2016, Tim Cook and two other top Apple officials arrived at the headquarters of the Chinese Communist Party in central Beijing to strike an agreement with the Chinese government. At the time, Donald Trump was still running for president, campaigning on an anti-China platform and a promise to get Apple “to build their damn computers and things in this country!”

But Cook was in Beijing that day to do the opposite: to impress upon President Xi Jinping’s government that Apple was so committed to China that it planned to spend $275 billion in the country over the next five years. “I call it a Marshall Plan for China, because I could not find any corporate spending coming close to what Apple was spending,” said Financial Times journalist Patrick McGee, who writes about this and other moments illustrating Apple’s role in enabling China’s rise in his new book Apple in China: The Capture of the World’s Greatest Company.

The $275 billion figure (which was previously reported by The Information) came from internal documents McGee obtained as part of his reporting. To put that staggering sum in context, McGee said, the Marshall Plan—the United States’ post–World War II investment in Europe, which is considered to be among the greatest nation-building exercises of all time and which reestablished the global order for decades to come—was about half that, adjusted for inflation.

You write about the sheer size of the investment that Apple ultimately made in China—$55 billion every year for five years, adding up to $275 billion. You compare that to what Congress allocated in the CHIPS and Science Act: $52 billion. That’s $3 billion shy of what Apple was spending every year in China.

Isn’t that crazy?

It’s nuts.

That’s from an internal document. And that $55 billion is not counting the components. I’m not adding up the costs of aluminum and chips. This is exclusively the cost that stays in China. It’s basically the employee training costs, the wages of those employees, construction of Apple stores, and the billions upon billions of dollars of machinery that Apple puts on suppliers’ production lines. It tags them as being for Apple use only. They’re not supposed to be used for other devices, but “supposed to” is the keyword there.

You describe Apple as “sleepwalking” into this situation. Did they really not see what they were setting themselves up for?

I know this verbatim because it’s my favorite quote in the book. Somebody over coffee said, “Are you sure you’re not overthinking your thesis? You keep talking about geopolitics, but I was there in the 2000s when we were setting up production in China, and I can tell you, we weren’t thinking about geopolitics at all.” That section ends, “Precisely.”

How China Captured Apple
By Bob Davis

The secret sauce to Apple’s outsourcing is what McGee calls “the Apple Squeeze.” To make sure foreign factories can meet Apple’s exacting standards, Apple flies in its engineers to “rigorously train local partners” in how to mass-produce new technology items at low prices, “in the process giving away manufacturing knowledge.” In exchange, contractors charge Apple lowball prices and operate at the thinnest of margins, or sometimes no profit at all.

As demanding as Apple is, it has plenty of suitors. Contractors know they can use the knowledge and prominence they gain from working with Apple to win contracts from other companies willing to pay higher prices. Beijing encourages Chinese firms to work with Apple, McGee argues, because it understands that Apple is providing a massive transfer of manufacturing knowhow. Over the years, Apple has trained a huge cadre of Chinese engineers and managers, and its contractors have employed millions of Chinese workers.

How Apple tied its fortunes to China
By Patrick McGee

Today, China accounts for 70 per cent of all smartphone manufacturing, according to Bloomberg Intelligence, and China sports a level of technical sophistication that multiple experts say they struggle to even comprehend. “It’s a really, really highly-evolved ecosystem in China,” says Jay Goldberg, founder of tech consultancy D/D Advisors.

China’s dominance can partly be quantified. In 2021, the number of organisations in the country that had been audited to confirm best practices in “quality management systems” — ISO certification 9001 — was 426,716, or roughly 42 per cent of the global total. For India the figure was 36,505; for the US, it was 25,561.

Getting Rome Right and America Wrong
By Bret Devereaux

In Why Empires Fall: Rome, America, and the Future of the West, Peter Heather and John Rapley set themselves to an all-too-familiar task, drawing lessons from the fall of the Roman Empire to apply to the ever-imminent and somehow never-yet-arriving collapse of the U.S.-led global order. Indeed, comparison between the United States and Rome, particularly its decline, is a well-worn and time-tested genre.

Heather and Rapley’s fundamental insight is a deceptively simple one: that empires by their very nature alter the structures of wealth and power that enabled their emergence in the first place. Rome’s Mediterranean empire created new economic and political power centers both in the provinces and across the frontier. In the same way, they argue that the empire of “the West” has transformed the global landscape. The globalized economy created by the West, initially for the purpose of extraction, nevertheless provided fertile soil for new centers of wealth and production in the developing world. As the centers of economic production shifted along the trade lanes the Western powers had themselves created, their own economies deindustrialized, and the share of global GDP produced in the West began to decline.

Rome’s economic integration, initially established so that the imperial center could more efficiently exploit its conquered provinces, led to the emergence of new wealth and new elites in the provinces, which in turn shifted the centers of power. This is delightfully illustrated by the contrast between the provincial poet-turned-politician Decimus Magnus Ausonius hailing from fourth-century Gaul and his contemporary, the snobbish blue-blooded Roman senator Quintus Aurelius Symmachus.

Across the frontier, exposure to the Roman economy created new centers of population and wealth outside Rome’s borders. That wealth, combined with the military pressure of living next door to the fearsome Roman military machine, in turn empowered leaders across the Rhine and Danube to form ever larger, more durable, and more formidable polities. At the same time, renewed Roman aggression in the East in the second century C.E. led to the emergence of a stronger peer competitor, the Sassanid Empire, locking Rome into a series of expensive conflicts it could ill afford. Rome shaped its own enemies—and as a demonstration of the thesis that empires decline in part because they transform their own foundations and then fail to adapt to those changes, the treatment of the late Roman Empire is both capable and useful.

While the book is titled Why Empires Fall, it does not compare two empires but rather one empire, the Roman one, with a rather loose grouping of states and cultures, the West. Many of these Western states had actual empires—and saw other Western imperial powers as their main competitors. The alchemy by which the authors transformed the sweep of European and Euro-American power from 1800 to the present into a singular empire of “the West” can be a deceptive one. For one, this lumps together three rather distinct eras: a period of multipolarity defined primarily by Western European empires before the World Wars; a period of bipolarity defined by the United States and the Soviet Union from World War II to 1991; and finally an era of U.S. preeminence from 1991 to, arguably, the present.

The conflation of these eras into a single “West” serves to obscure real differences in these three different periods. The great movement of the West’s Asian and African “periphery” out of extreme poverty belongs not to the colonial era before the World Wars but to the postwar era and indeed primarily to the post-Cold War era. Likewise, the rapid growth of the Chinese economy and concomitant rising influence of the People’s Republic of China have been largely a post-Cold War phenomenon.

This need not be fatal to the book’s underlying argument about empires and transformation, but it implies not one monolithic Western imperial order so much as a succession of imperial orders, each undermining its own foundations in turn.

In 1898, the U.S. was entranced by empire. The legacy lingers.
By Philip Kennicott

The politics of imperialism are complex and made for what seem today strange alliances. The Anti-Imperialist League, organized in 1898, opposed U.S. expansionism on the principled grounds that it was contrary to the basic ideals of self-governance and independence enshrined in the United States’s founding documents. But prominent members of the group, including the labor leader Samuel Gompers and Sen. Benjamin Tillman of South Carolina, argued that annexing new territories would bring a flood of non-White people into the United States. Racism against Black, Brown and Asian people was a powerful motivator for many who opposed U.S. overseas policy at the time.

Colonialism and imperialism were a kind of cleaning-up operation, interventions in places where things seemed too chaotic or confused for democracy or self-governance. The Cuban revolution against Spain saw atrocities on both sides, and newspaper accounts of Spanish cruelty helped enlist sympathy for the Cuban revolutionaries. The coup in Hawaii and the later annexation were marketed as necessary steps against political disorder.

In 1898, the preferred terminology for U.S. imperialism used words like territory or possessions, rather than colonies. Throughout much of the past 125 years, the United States avoided the naked land grabs of 1898, opting for political, economic and military interventions that often brought yet more chaos to places perceived as too chaotic to govern themselves.

The habits of mind that perpetuate imperialism, especially the blunt dichotomy between order and chaos, remain in force today, in the United States and other countries with imperial legacies and ambitions. The most costly and destructive foreign policy decision of the past half century, the 2003 invasion of Iraq, was sold largely as an effort to bring stability and order to a country that was falsely advertised to have weapons of mass destruction. Vladimir Putin has used the idea of chaos on his borders to market his invasion of Ukraine.

The American Empire in (Ultimate?) Crisis
By Alfred McCoy

Since the closing months of the Cold War, mismanaging relations with Ukraine has been a curiously bipartisan project. As the Soviet Union began breaking up in 1991, Washington focused on ensuring that Moscow’s arsenal of possibly 45,000 nuclear warheads was secure, particularly the 5,000 atomic weapons then stored in Ukraine, which also had the largest Soviet nuclear weapons plant at Dnipropetrovsk.

During an August 1991 visit, President George H.W. Bush told Ukrainian leader Leonid Kravchuk that he could not support Ukraine’s future independence and gave what became known as his “chicken Kiev” speech, saying: “Americans will not support those who seek independence in order to replace a far-off tyranny with a local despotism. They will not aid those who promote a suicidal nationalism based upon ethnic hatred.” He would, however, soon recognize Latvia, Lithuania, and Estonia as independent states since they didn’t have nuclear weapons.

When the Soviet Union finally imploded in December 1991, Ukraine instantly became the world’s third-largest nuclear power, though it had no way to actually deliver most of those atomic weapons. To persuade Ukraine to transfer its nuclear warheads to Moscow, Washington launched three years of multilateral negotiations, while giving Kyiv “assurances” (but not “guarantees”) of its future security — the diplomatic equivalent of a personal check drawn on a bank account with a zero balance.

Under the Budapest Memorandum on Security Assurances in December 1994, three former Soviet republics — Belarus, Kazakhstan, and Ukraine — signed the Nuclear Non-Proliferation Treaty and started transferring their atomic weapons to Russia. Simultaneously, Russia, the U.S., and Great Britain agreed to respect the sovereignty of the three signatories and refrain from using such weaponry against them. Everyone present, however, seemed to understand that the agreement was, at best, tenuous. (One Ukrainian diplomat told the Americans that he had “no illusions that the Russians would live up to the agreements they signed.”)

Meanwhile — and this should sound familiar today — Russian President Boris Yeltsin raged against Washington’s plans to expand NATO further, accusing President Bill Clinton of moving from a Cold War to a “cold peace.” Right after that Budapest conference, Defense Secretary William Perry warned Clinton, point blank, that “a wounded Moscow would lash out in response to NATO expansion.”

Nonetheless, once those former Soviet republics were safely disarmed of their nuclear weapons, Clinton agreed to begin admitting new members to NATO, launching a relentless eastward march toward Russia that continued under his successor George W. Bush. It came to include three former Soviet satellites, the Czech Republic, Hungary, and Poland (1999); three one-time Soviet Republics, Estonia, Latvia, and Lithuania (2004); and four more former communist states, Bulgaria, Romania, Slovakia, and Slovenia (2004). At the Bucharest summit in 2008, moreover, the alliance’s 26 members unanimously agreed that, at some unspecified point, Ukraine and Georgia, too, would “become members of NATO.” In other words, having pushed NATO right up to the Ukrainian border, Washington seemed oblivious to the possibility that Russia might feel in any way threatened and react by annexing that nation to create its own security corridor.

In those years, Washington also came to believe that it could transform Russia into a functioning democracy to be fully integrated into a still-developing American world order. Yet for more than 200 years, Russia’s governance had been autocratic and every ruler from Catherine the Great to Leonid Brezhnev had achieved domestic stability through incessant foreign expansion. So, it should hardly have been surprising when the seemingly endless expansion of NATO led Russia’s latest autocrat, Vladimir Putin, to invade the Crimean Peninsula in March 2014, only weeks after hosting the Winter Olympics.

Europe Is Reliving Its ‘Great Illusion’
By Caroline de Gruyter

“The great nations of Europe do not destroy the trade of the small nations for their own benefit, because they cannot; and the Dutch citizen, whose Government possesses no military power, is just as well off as the German citizen, whose government possesses an army of two million men, and a great deal better off than the Russian, whose Government possesses an army of something like four million. … All of which carries with it the paradox that the more a nation’s wealth is militarily protected the less secure does it become.”

That is a key passage from The Great Illusion by Norman Angell (1872-1967), first published in 1909.

If war still happened to break out, Angell argued, it would be likely to soon stop anyway because all participants would realize how foolish it was. Economic interdependence was, in his view, the ultimate guarantor of lasting peace on the continent.

If there is one lesson to be learned from Putin’s wars in Chechnya, Georgia, Ukraine, and also Syria and North Africa, it is that economic rationale does not weigh heavily on his choice to use force. The decision to go to war to restore an old empire is irrational by definition.

If there is any consistency in Putin’s reasoning, it is that economic arguments hardly come into the equation. Unfortunately, many Europeans seem to have forgotten not just the experience of war itself, but also any insight into the reasons why nations go to war. Each time in the past 25 years that Putin threatened to invade a country, many reacted by saying that he would probably not do it. And each time he then did invade, officials in almost every European chancellery fell off their chairs because they had not expected it.

Two world wars eventually persuaded Norman Angell to admit that his theory—for which he received the Nobel Peace Prize in 1933—was flawed. In the early 1930s, he acknowledged that while it is crucial to uphold democratic values and norms, countries need to be able to defend themselves against foreign aggression, and that those who decide not to defend themselves are actually inviting war by turning themselves into easy prey. As the Romans said, si vis pacem para bellum—if you want peace, prepare for war.

In 1914, Angell campaigned hard to keep Britain out of the war. By 1940, he wanted it to join the war as soon as possible. Less than a decade later, he became a staunch supporter of NATO.

Putin’s Prepared Remarks at 43rd Munich Conference on Security Policy
By Vladimir Putin

Only two decades ago the world was ideologically and economically divided and it was the huge strategic potential of two superpowers that ensured global security.

This global stand-off pushed the sharpest economic and social problems to the margins of the international community’s and the world’s agenda. And, just like any war, the Cold War left us with live ammunition, figuratively speaking. I am referring to ideological stereotypes, double standards and other typical aspects of Cold War bloc thinking.

The unipolar world that had been proposed after the Cold War did not take place either.

However, what is a unipolar world? However one might embellish this term, at the end of the day it refers to one type of situation, namely one centre of authority, one centre of force, one centre of decision-making.

It is a world in which there is one master, one sovereign. And at the end of the day this is pernicious not only for all those within this system, but also for the sovereign itself because it destroys itself from within.

And this certainly has nothing in common with democracy. Because, as you know, democracy is the power of the majority in light of the interests and opinions of the minority.

Incidentally, Russia – we – are constantly being taught about democracy. But for some reason those who teach us do not want to learn themselves.

Today we are witnessing an almost uncontained hyper use of force – military force – in international relations, force that is plunging the world into an abyss of permanent conflicts. As a result we do not have sufficient strength to find a comprehensive solution to any one of these conflicts. Finding a political settlement also becomes impossible.

We are seeing a greater and greater disdain for the basic principles of international law. And independent legal norms are, as a matter of fact, coming increasingly closer to one state’s legal system. One state and, of course, first and foremost the United States, has overstepped its national borders in every way. This is visible in the economic, political, cultural and educational policies it imposes on other nations. Well, who likes this? Who is happy about this?

In international relations we increasingly see the desire to resolve a given question according to so-called issues of political expediency, based on the current political climate.

And of course this is extremely dangerous. It results in the fact that no one feels safe. I want to emphasise this — no one feels safe!

The Lockerbie bombing: What you need to know
By Reuters

* On Dec. 21, 1988, the flight to New York blows up over Scotland minutes after takeoff from London. The bombing kills all 259 people aboard the Boeing 747 jumbo jet and 11 residents of the town of Lockerbie.

* On Jan. 31, 2001, a three-judge Scottish court at a former U.S. base in the Netherlands finds Libyan intelligence agent Abdel Baset Ali al-Megrahi guilty while acquitting another agent, Lamen Khalifa Fhimah. The trial follows years of wrangling between Libya, Britain and the United States.

* On Aug. 15, 2003, Libya, in a letter to the United Nations, accepts responsibility for the Lockerbie bombing. Libya pays more than $2 billion in compensation. On Sept. 12, the U.N. Security Council unanimously adopts a resolution lifting sanctions imposed on Libya over the bombing.

How Libya kept migrants out of EU – at any cost
By Matthew Carr

For more than a decade the booming Libyan economy has been a destination for legal and illegal migrants from Africa and even further afield, from Bangladesh and China. Libya’s long coastline has also been a springboard for undocumented migration into Europe. Following Tony Blair’s famous kiss in 2004, Gaddafi entered into a series of agreements with the European Union and individual governments, in which Libya effectively became a co-partner in enforcing Europe’s ‘externalised’ border controls.

Joint naval patrols with Italy; laws penalising illegal immigration; a crackdown on ‘people smugglers’; new detention centres and deportation procedures; readmission agreements on migrants intercepted at sea – all these developments reflected Libya’s new willingness to cooperate with Europe’s exclusionary agenda and many of them were part-financed by the EU and Italy. Humanitarian considerations were conspicuously absent from these arrangements. Libya has never signed the 1951 Geneva Refugee Convention and has no functioning asylum system. Economic migrants and refugees in Libya were routinely detained for months and even years in horrendously overcrowded detention centres, where rape, violence and torture were common. Each year thousands of African migrants were deported to the Libyan/Sudanese border and abandoned in the desert, or flown back to their countries of origin on deportation charter flights without being screened to find out whether they were in need of refugee protection. There were also allegations of Libyan coastguard patrols opening fire on migrant boats. These practices were not unknown: many of them were contained in reports by Amnesty International and other human rights organisations. But the same governments that have so vociferously condemned Gaddafi’s current violations of human rights had little to say about Libya’s brutal and inhuman treatment of migrants.

From Europe’s point of view, this system worked extremely well. In Italy and Malta, there was a dramatic reduction in the numbers of migrant arrivals in 2009 as a result of Libyan collaboration. By outsourcing its border controls to a dictatorship without a functioning asylum system, Europe was able to prevent asylum-seekers from reaching its shores, while continuing to proclaim its commitment to the principle of refugee protection. Other EU neighbours have also acted as buffer zones in Europe’s immigration controls, including Tunisia and Morocco, Ukraine and Turkey. But none has been as ruthlessly effective as Libya.

Gaddafi seeks EU funds to curb illegal migrants
By Financial Times

Muammer Gaddafi, Libya’s leader, urged the European Union on Monday to give €5bn a year to his country to fight illegal immigration in the Mediterranean.

“Libya turns to the European Union to support what Libya asks because Europe, in the future, might not be Europe any more but might turn black because of all the illegal immigrants”, said Col Gaddafi on the second anniversary of the Libyan-Italian friendship treaty.

A 2009 deal, under which Libya pledged to help stop the flow of illegal immigrants from Africa to Italy, has been condemned by the Vatican and by human rights organisations. The UN estimated that in the first four months of the treaty alone, Italy sent back 1,000 Africans it had intercepted in international waters, without screening them for refugee status.

The Libyan leader arrived in the Italian capital on Sunday for a two-day visit – his third in the past year. Under the 2008 accord, Italy pledged to pay some $5bn over 25 years as reparations for its colonial rule of the north African state, which lasted from 1911 to 1943. In return, Libya made a deal regarding asylum seekers and gave Italian companies priority in infrastructure projects.

Tony Blair and Colonel Gaddafi: The questions the former Prime Minister faces over his ties with Libya
By Kim Sengupta

One of the most contentious aspects of the relationship between the Blair government and the Gaddafi regime is terrorism. In 2004 the British Prime Minister emerged from the tent in Sirte to say how struck he was that Col Gaddafi wanted to make “common cause with us against al-Qaeda, extremists and terrorism”.

What remained hidden from the public was what making “common cause” entailed. After the revolution, journalists, including myself, discovered documents in Tripoli charting how British officials had played a part in the abduction of two opposition leaders, Abdel Hakim Belhaj and Sami al-Saadi of the Libyan Islamic Fighting Group (LIFG), and their families, who were then flown to the regime.

MI5, in an assessment made 11 months after the rendition of Mr Belhaj and Mr Saadi, noted that the two men had jealously guarded the independence of the LIFG. But since their disappearance into the regime’s prisons, an emergent leadership had pushed for a merger with international terrorist organisations.

Two years later a new leader, Abu Laith al-Libi, announced that LIFG was joining al-Qaeda. Libi went on to organise suicide attacks in Afghanistan against American and British targets before being killed in a drone strike.

Critics claim the Gaddafi regime’s policy of repression, backed by the Blair government, helped fuel anger on the streets of Libya, which led to the revolution and its aftermath of strife and extremism.

A brief history of UK-Libya relations
By Simon Hooper

On a Sunday morning in February 2011, William Hague, the British foreign secretary, received an unexpected phone call at home from Saif al-Islam Gaddafi, the well-connected son of Libyan leader Muammar Gaddafi.

“He came on the line to say there was now an alarming situation in Libya, and on behalf of his government he was asking the United Kingdom and the West to help them to suppress the rebellion,” Hague recalled last week in front of a parliamentary inquiry examining Britain’s relations with Libya and its part in the NATO-led intervention that contributed to Gaddafi’s downfall.

“He had clearly formed a view that this was a friendly country, and that a friendly country with which you have a personal relationship will intervene on your behalf when you are in trouble … I think he was under a misapprehension about that, but he was not under it for many minutes,” Hague continued.

Yet the exchange tells its own story of the close links that had briefly flourished between the British and Libyan governments, even despite decades of hostility marked by the alleged Libyan bombing of an American airliner over Lockerbie, Scotland, in 1988, the killing of a female police officer by a gunman shooting from the Libyan embassy in London in 1984, and British support for US air strikes against Libya in 1986.

Today, the inquiry will hear from Tony Blair, the former prime minister whose handshake with his host in a tent on the outskirts of Tripoli in 2004 – dubbed the “Deal in the Desert” – set the tone for Gaddafi’s transformation from an international pariah into a favoured ally.

British oil firms benefited from the deal in the form of lucrative contracts, while the Gaddafi family moved billions of dollars in assets to London and gained access to influential circles, with Saif acquiring a PhD from the London School of Economics and subsequently facing plagiarism accusations over his outsourcing of the research to a consultancy firm.

The EU, NATO and the Libya crisis: Scaling ambitions down?
By Stefano Marcuzzi

In March 2011, a coalition of countries under the United Nations (UN) umbrella and led militarily by the North Atlantic Treaty Organization (NATO) launched an air campaign in support of a series of revolts against the regime of Mu’ammar Qadhafi in Libya, ostensibly to stop Qadhafi’s reprisals on civilians. By the end of October, Qadhafi was dead and his regime had collapsed. A number of NATO countries, as well as the European Union (EU) committed themselves to supporting Libya’s stabilization and democratization. But in fact, the intervention’s aftermath saw the disintegration of the country. The following decade was punctuated by military escalations, culminating in two further wars in 2014-15 and 2019-20, which were at the same time intra-Libyan wars and proxy wars waged by regional and global powers such as Russia, Turkey, Egypt, the United Arab Emirates (UAE), Qatar, France, and Saudi Arabia, each supporting a local party in the conflict with weapons, mercenaries and occasionally some regular forces.

By the time NATO intervened with Operation Unified Protector (OUP) on 31 March, four operations were already ongoing against Qadhafi (one French, one British, one American and one Canadian). The Alliance, heavily committed in Afghanistan, was unhappy to be involved in Libya, and was divided. Some member states, such as Britain and Italy, were willing to use NATO assets; many more, including France (the most active promoter of the anti-Qadhafi intervention), Germany, Turkey and the Eastern European members were either unconvinced or totally uninterested, each for its own reason. The US, for its part, supported the intervention but wanted to take a backseat – what President Barack Obama famously termed “leading from behind.” The final decision to involve NATO in the conflict was dictated by military considerations, namely the need for better coordination, clearer command and control and stronger assets as the war lasted for longer than anticipated.

At the political level, and in contrast to previous NATO operations, the direction of the war was not given to the North Atlantic Council (NAC), the political body of the Alliance, but to a Libya Contact Group made up of delegates from the countries participating in the coalition. NATO was tasked only with the military operations. That meant that NATO could not exercise political oversight of non-NATO members of the coalition, some of which – especially the Qataris and the Emiratis – played a critical role in arming and training some Libyan rebel factions bilaterally.

At the strategic level, the Libya Contact Group opted for the strictest possible interpretation of UN Security Council Resolution (UNSCR) 1973, which authorized “all necessary means,” short of “occupation forces,” to protect Libyan civilians, and adopted the “no boots on the ground” rule for NATO. This served to allay anxieties in NATO countries about being dragged into a prolonged nation-building endeavor, and to foster greater international support for the operation. But without ground support, NATO’s air assets experienced growing difficulty in distinguishing between rebel forces and Qadhafi loyalists, which slowed down operations and led to increasing (perhaps avoidable) collateral damage. This also spurred individual member states, in particular France and Britain, to pursue a number of initiatives parallel to OUP, including the deployment of special forces, which was seen internationally as a violation of the “no boots on the ground” rule and which alienated important players, including the African Union and Russia.

The Libyan crisis is revealing of a trend of “bold commitment but compromised means” common to both NATO and the EU. NATO’s half-hearted 2011 intervention left a power vacuum from which a number of threats to NATO member states emerged; that vacuum eventually provided Russia – NATO’s main rival – with a foothold in a strategic region rich in hydrocarbon resources. For the EU, the Libyan crisis is the story of a short-circuit between the Union’s foreign policy paradigm, based on soft power, and the needs of a hard security crisis. Both organizations failed to fulfil their promises to the Libyan people, and lost leverage in the region as a result. This calls into question the rationale and future of Western/liberal crisis management.

The Libyan Coast Guard Is Not What It Seems
By Michael Scott Moore

The trafficking routes through North Africa trace old slave routes, according to Jason Pack, the author of Libya and the Global Enduring Disorder—and stories of migrants forced to work as slaves in Libya are common. A CNN team filmed a slave market in 2017, jerky footage of twelve migrants from Niger auctioned in a nighttime Libyan yard. “Big strong boys for farm work,” offers a smuggler in Arabic. Women have to clean houses or work as prostitutes; men work on farms or construction sites.

Each ride on a shoddy boat toward Europe may cost hundreds or even thousands of dollars, and the price is separate from the cost of travel overland through Africa. For distant families on the receiving end of panicked phone calls from unknown parts of the world, these prices amount to ransoms to free their relatives from the stinking, inhuman detention camps.

Meanwhile, Italy and the European Union shovel tens of millions of euros at the Libyan Coast Guard, which even in its most buffed-up description is a patchwork fleet of old Italian vessels crewed by Kalashnikov-brandishing men with a glancing relationship to one of the warring militias on shore. It is not a normal agency. The ships we saw from the Humanity 1 behaved not just as enforcers of EU policy but as a long arm of traffickers on land. “The Coast Guard has always worked with the Italians to deter migrants,” Pack told me. “Some of them take money to [help] smuggle the migrants, and then take money to push them back.” The trafficking business on the Mediterranean, in other words, is like a washing machine, churning people out and back for profit.

But the angle taken by most Western journalists is that the European Union simply outsources its so-called dirty work to the Libyan Coast Guard, meaning that the EU pays the Libyans to ensure migrant boats never travel as far as the central Mediterranean, where they become boats in distress to be rescued by ships like the Humanity 1. This much is true, but it leaves the lazy impression that Libyan interceptions can reduce human trafficking. Most people hear it and avert their eyes, because until a dozen years ago, the EU outsourced its dirty work to former Libyan leader Muammar al-Qaddafi, who leveraged a bog-standard racist fear of immigrants in Europe to negotiate profitable aid packages for himself. He did suppress the smugglers, at least before he was overthrown during the Arab Spring by NATO air raids and his own people.

Libya finds two mass graves with bodies of nearly 50 migrants, refugees
By Al Jazeera

Libyan authorities have uncovered nearly 50 bodies from two mass graves in the country’s southeastern desert, in the latest tragedy involving people seeking to reach Europe through the North African country.

Al-Abreen, a charity that helps migrants and refugees in eastern and southern Libya, said that some of the people found in the mass graves had been shot and killed before they were buried.

Mass graves containing the bodies of asylum seekers have previously been discovered in Libya, the main transit point for migrants from Africa and the Middle East trying to make it to Europe.

Last year, authorities unearthed the bodies of at least 65 migrants in the Shuayrif region, south of the capital Tripoli.

Human traffickers have benefited from more than a decade of instability, smuggling migrants and refugees across the country’s borders with six nations: Chad, Niger, Sudan, Egypt, Algeria and Tunisia.

Rights groups and United Nations agencies have for years documented systematic abuse of asylum seekers in Libya including forced labour, beatings, rapes and torture. The abuse often accompanies efforts to extort money from families before they are allowed to leave Libya on traffickers’ boats.

Those who are intercepted and returned to Libya are held in government-run detention centres where they suffer from abuse, including torture, rape and extortion, according to rights groups and UN experts.

Trump Administration Plans to Send Migrants to Libya on a Military Flight
By Eric Schmitt, Hamed Aleaziz, Maggie Haberman and Michael Crowley

The Trump administration is planning to transport a group of immigrants to Libya on a U.S. military plane, according to U.S. officials, another sharp escalation in a deportation program that has sparked widespread legal challenges and intense political debate.

The Libya operation falls in line with the Trump administration’s effort not only to deter migrants from trying to enter the country illegally but also to send a strong message to those already in the country illegally that they can be deported to countries where they could face brutal conditions. Reuters earlier reported the possibility of a U.S. deportation flight to Libya.

The potential use of Libya as a destination comes after the administration set off an earlier furor by deporting a group of Venezuelans to El Salvador, where they are being held in a maximum-security prison designed for terrorists.

The State Department warns against traveling to Libya “due to crime, terrorism, unexploded land mines, civil unrest, kidnapping and armed conflict.” The country remains divided after years of civil war following the 2011 overthrow of its longtime dictator, Muammar Gaddafi. A United Nations-recognized government in Tripoli rules western Libya, and another in Benghazi, led by the warlord Khalifa Haftar, controls the east.

What a Decade-Old Conflict Tells Us About Putin
By Kim Ghattas

In epochal moments such as these, when a strongman uses the might of his military to invade another country, we often look backwards to search for the moments that brought us to the present, seeking out signs of what was to come. In the case of Putin and Russia, this effort has concentrated on his domestic political evolution, and his relations with the occupants of the White House. Yet one can trace a straight line from the Libya episode—in which Putin’s country initially stood on the sidelines, and which occurred when he was on his four-year hiatus from the presidency in the prime minister’s office—to today’s devastating war in Ukraine.

Having largely gotten away with the takeover of the breakaway Georgian regions of South Ossetia and Abkhazia in 2008, Putin saw the Libya intervention as the result of a chain of revolutions followed by Western military interventions that could eventually reach him. And he saw in Gaddafi someone who had accepted the West’s terms and yet nevertheless paid the price, a fate that could ultimately await him. The lesson is a dire one for Ukraine: In Putin’s current worldview, backing down or making any concessions is a death sentence.

Rewind to the Arab uprisings of 2011. After the fall of Tunisia’s Zine el-Abidine Ben Ali and Egypt’s Hosni Mubarak, street protests engulfed much of the region, including Libya and Syria. Gaddafi threatened to crush the protesters like “cockroaches.” France and Britain were agitating to intervene. The Obama administration first dragged its feet before throwing its weight behind efforts to establish a United Nations–backed no-fly zone.

U.S. administration officials such as then–Vice President Joe Biden pressed the issue with Dmitry Medvedev, Russia’s president at the time, and Secretary of State Hillary Clinton helped seal the deal with Foreign Minister Sergey Lavrov over the phone, standing backstage at a television studio after a town-hall event in Tunis. The UN resolution authorized “all necessary measures” to protect civilians in Libya, which included, but was not limited to, a no-fly zone. Russia would not approve the resolution, so the Obama administration was hoping that it would at least abstain, rather than veto the move, during the vote at the Security Council.

Russia’s abstention was seen by the Obama administration as a diplomatic success. Putin, however, saw it as proof of the West’s treachery. He described the resolution as a “medieval call for a crusade,” another war in a long line of wars initiated by the West—from Serbia to Afghanistan to Iraq—to pursue regime change, sometimes under false pretexts, and ultimately dictate the rules of the global order.

According to several accounts, including current CIA chief William Burns’s book The Back Channel, Putin frequently replayed the gruesome footage of Gaddafi being captured in a drainage pipe and beaten to death. The capture, trial, and execution of Saddam Hussein did not seem to affect Putin as much. He had flippantly told French President Nicolas Sarkozy that he would hang Georgian President Mikheil Saakashvili just as “the Americans had hanged Saddam Hussein.” But the lesson Putin drew from Libya was different: Being a pariah had served Gaddafi best; only when he had opened up to the West had they come after him.

Russia Has Been Warning About Ukraine for Decades. The West Should Have Listened
By Anatol Lieven

The Yeltsin government protested strongly against the start of NATO expansion in the 1990s and Russia accustomed itself without too much trouble to NATO membership for the former Soviet satellites in Central Europe. But from the very beginning of NATO expansion in the mid-1990s, Russian officials and commentators—including liberal reformists—warned that an offer of NATO membership to Georgia and Ukraine would bring confrontation with the West and an acute danger of war. These warnings were echoed by George Kennan, the original architect of the strategy to contain the USSR and the State Department’s greatest ever Russia expert, as well as by Henry Kissinger and other leading American statesmen.

There is nothing mysterious, extreme, or Putinesque about this Russian attitude. In the first place, Western language about NATO expansion establishing a “Europe whole and free” implies the exclusion of Russia from Europe and from a role in Europe—a matter of deep offence to Russians, and Russian liberals in particular, especially since this Western rhetoric was imbued with the assumption (a racist one, by the way) that the word “European” equates to “civilized.” And that Russia isn’t part of that idea.

Russian fears about the expansion of a potentially hostile military alliance to Russia’s borders should be understandable to any American who has heard of the Monroe Doctrine. In Georgia and Ukraine, there are also specific issues inherited from the Soviet Union: in the case of Georgia, the separatist movements of the Abkhaz and Ossete minorities and the resulting ethnic conflicts involving Russia. Such ethnic conflicts over territory, with the involvement of outside powers, are all too common during and after the fall of empires: think for example of Turkey’s invasion of Cyprus in 1974, and Ankara’s establishment of an (internationally unrecognised) mini-state for the Turkish Cypriot minority.

None of this is intended to justify Russia’s actions, which have often been stupid as well as criminal—as with the annexation of Crimea in 2014. However, they are hardly unusually so in the context of the fall of empires and their aftermath. Did the British really behave much better as their empire came apart, let alone the French, Portuguese or Turks? A Frenchman may believe that they did; an Algerian might have a different opinion.

Much more importantly, these Russian policies have been linked to a specific set of post-Soviet issues and Russian regional goals. They are not part of some grand malign design to destroy international order, or to act as a wilful “disruptor.” Insofar as Russia has set out deliberately to damage Western interests (for instance with social media disinformation campaigns), it has been as a way to put pressure on the West in pursuit of those goals. It may also be pointed out that in the Middle East, it is the U.S. that has frequently acted as a disruptor, as with the invasion of Iraq, the destruction of the Libyan state, and Trump’s decision to abandon the nuclear agreement with Iran, while Russia has often defended the status quo—partly due to a fear of Islamist terrorism that it shares with the U.S.

In other words, while the terms of any compromise with Russia over Ukraine would involve some tough negotiation, we can seek such a compromise without fearing that this will open the way for further Russian moves to destroy NATO and subjugate eastern Europe—a ridiculous idea for anyone who knows either the goals of the Russian establishment or the character of Poles and Estonians. These nations are in NATO and are categorically committed to remaining so. There is no way for Russia to remove them without a direct attack on NATO—a hideously dangerous undertaking that forms no part of the Russian establishment’s plans. NATO is in fact entirely safe within its existing borders. The threat to NATO’s safety and prestige is largely of its own making, with its empty commitments to countries that it has neither the will nor the ability to defend.

For years, Putin didn’t invade Ukraine. What made him finally snap in 2022?
By Anatol Lieven

In 2014, the Ukrainian army was hopelessly weak; in Viktor Yanukovych, the Russians had a pro-Russian, democratically elected Ukrainian president; and incidents like the killing of pro-Russian demonstrators in Odesa provided a good pretext for action.

The reason for Putin’s past restraint lies in what was a core part of Russian strategy dating back to the 1990s: trying to wedge more distance between Europe and the United States, and ultimately to create a new security order in Europe with Russia as a full partner and respected power. It was always clear that a full-scale invasion of Ukraine would destroy any hope of rapprochement with the western Europeans, driving them for the foreseeable future into the arms of the US. Simultaneously, such a move would leave Russia diplomatically isolated and dangerously dependent on China.

Putin now seems to agree fully with Russian hardline nationalists that no western government can be trusted, and that the west as a whole is implacably hostile to Russia. He remains, however, vulnerable to attack from those same hardliners, both because of the deep incompetence with which the invasion was conducted, and because their charge that he was previously naive about the hopes of rapprochement with Europe appears to have been completely vindicated.

It is from this side, not the Russian liberals, that the greatest threat to his rule now comes; and of course this makes it even more difficult for Putin to seek any peace that does not have some appearance, at least, of Russian victory.

The flimsy UK, France, Ukraine ‘peace plan’ discussed Sunday
By Anatol Lieven

The behavior of the European governments is shaped by a belief in limitless Russian territorial ambition, hostility to the West, and reckless aggression that, if genuinely held, would seem to make any pursuit of peace utterly pointless. The only sensible Western strategy would be to cripple or destroy Russia as a state — the only problem being, as Trump has stated, that this would probably lead to World War III and the end of civilization.

Evil Empires?
By Lauren Benton

Most European empires may have unraveled in the twentieth century, but their legacies remain. When Russian President Vladimir Putin refused to call the February invasion of Ukraine a war, he was reading from an imperial script. In Putin’s view, Ukraine was never a true nation-state. It was a former piece of the Russian empire later absorbed into a rival imperial fold, one dominated by the United States and its western European allies. By labeling the invasion a “special military operation,” Putin was presenting the war as an act of imperial policing, not military aggression.

Putin’s actions and rhetoric raise some uncomfortable questions for the denizens of former empires. They may reasonably wonder whether any postimperial state can ever free itself from a history of riding roughshod over the political aspirations of less powerful peoples. One part of the answer lies in whether bad conduct in empires was just a limited phase or something deeper, a structural tendency toward unjust, organized violence. Another lies in whether there was any meaningful difference between self-described liberal empires, with their good-on-paper claims about allegiance to the rule of law, and illiberal empires that condoned the arbitrary use of force and the impunity of state actors.

The Field of Geopolitics Offers Both Promise and Peril
By Hal Brands

Geopolitics is the study of how geography interacts with technology and the ceaseless struggle for global power. It came to prominence in an era of titanic clashes to rule the modern world by controlling its central theater, Eurasia. And if geopolitics seemed passé in the post-Cold War era, its relevance is surging now that vicious strategic rivalry has been renewed.

Mackinder, Mahan, and Spykman were trying to navigate an era of global confrontations made more terrifying by new technologies and the dawn of new, more virulent forms of tyranny. All three men were ultimately concerned with whether democratic societies—those that honored “the freedom and rights of the individual,” as Mahan put it—could survive the challenge from those that practiced “the subordination of the individual to the state.” So all three were trying to determine what strategies—and what “combinations of power,” in Mackinder’s phrasing—could underpin a tolerable global order and prevent Eurasian consolidation from ushering in a new dark age.

This democratic school of geopolitics saw a supercontinent run by illiberal powers as a nightmare to be avoided. The authoritarian school saw it as a dream to be achieved.

The latter tradition originated with Swedish academic Rudolf Kjellen and German geographer Friedrich Ratzel in the late 19th century. These thinkers were products of Europe’s cramped, cutthroat geography, and they channeled some of the time’s most toxic ideas.

Kjellen and Ratzel were influenced by social Darwinism: They saw nations as living organisms that must expand or die, and they defined nationhood in racial terms. Their school of thought prioritized the quest for Lebensraum, or “living space,” a term Ratzel coined in 1901. Although this tradition sometimes drew inspiration from the success of the United States in conquering and settling a continent, it blossomed most fully in countries, such as imperial Germany, where expansionist visions and illiberal, militaristic values went hand in hand. And as the history of the subsequent decades would demonstrate, geopolitics with this reactionary, zero-sum bent was a blueprint for unprecedented aggression and atrocity.

The epitome of this approach was Karl Haushofer, a World War I-era artillery commander who took up the cause of German resurrection after that country’s defeat in 1918. For Haushofer, geopolitics was synonymous with expansion. Germany had been mutilated by the Allies after World War I; its only response was to explode “out of the narrowness of her present living space into the freedom of the world,” he wrote. Germany must claim a resource-rich, autarkic imperium across Europe and Africa. He believed that other oppressed, have-not countries—namely Japan and the Soviet Union—would do likewise across the remainder of Eurasia and the Pacific.

Only by consolidating what Haushofer called “pan-regions” could the revisionist states outmatch their enemies; only by working together could they prevent those enemies, namely Britain, from playing divide-and-conquer. The goal of this geopolitics was a Eurasia ruled by an autocratic axis. What Mackinder had warned about, Haushofer—who borrowed liberally from his work—was determined to realize.

There was no pretension that this could be accomplished without mayhem and murder. The world, Haushofer wrote, needed “a general political clearing up, a redistribution of power.” Small countries “have no longer a right to exist.” Haushofer would endorse Germany’s murderous acquisition of “living space” in the late 1930s and early 1940s—he even helped inspire this ghastly campaign.

Haushofer had counseled Adolf Hitler while the latter was imprisoned in the 1920s. Central arguments of Hitler’s treatise, Mein Kampf—such as the importance of eliminating European rivals and the need for resources and space in the east—were pure Haushofer, historian Holger Herwig argued. Hitler’s advocacy of a vast Eurasian land empire as the answer to Anglo-American sea power drew, likewise, on Haushofer’s ideas. History’s most brazen land grab owed much to Hitler’s megalomania, pathological racism, and epic thirst for power. It was also underpinned by a geopolitics of evil.

Clashes of countries are clashes of ideas. And one way of interpreting the 20th century is that the democratic school of geopolitics defeated the autocratic one.

In World War I, World War II, and the Cold War, radically revisionist states ran versions of Haushofer’s playbook. Germany, Japan, and the Soviet Union seized vast tracts of Eurasia and contested its neighboring oceans. In areas they controlled, they sometimes governed with homicidal brutality. Yet they were ultimately defeated by global coalitions that were helmed by liberal superpowers and guided by the democracies’ best geopolitical insights.

… there were moral compromises aplenty in these struggles. The Western democracies forged devil’s bargains with Soviet leader Joseph Stalin in World War II and Chinese leader Mao Zedong in the late Cold War. They used tactics—blockades, firebombing, coups, and covert interventions—that could only be justified by their contribution to some higher good. … The democracies wielded power ruthlessly enough to prevent the world’s worst aggressors from ruling its most vital regions.

The Return of the Monroe Doctrine
By Tom Long and Carsten-Andreas Schulz

The tenets of what would posthumously become known as the Monroe Doctrine were first enunciated on Dec. 2, 1823, by then-U.S. President James Monroe during his annual message to Congress—but the passage in question was largely penned by then-Secretary of State John Quincy Adams. Monroe and Adams’s foreign policy contained two main principles. The first was the establishment of what they called “separate spheres” between Europe and the Americas. The second was the affirmation of U.S. opposition to European attempts at reconquest and territorial ambitions in Latin America and the Pacific Northwest.

At inception, the idea was not a doctrine, nor could the fledgling U.S. republic back it up with force. Monroe’s address was initially perceived as a declaration of solidarity against the threat of European conquest, albeit a rather high-handed one. Independence leaders in the former Spanish American colonies took polite notice of Monroe’s address as an expression of tacit support for their cause.

However, when the United States annexed the northern half of Mexico during a war of conquest that lasted from 1846 to 1848, the U.S. policy took on a foreboding cast.

Successive U.S. governments invoked the Monroe Doctrine to ward off other adversaries around the world—the British, the German empire, the Axis powers of World War II, and later the Soviet Union. In Latin America, the doctrine offered countries U.S. protection (whether requested or not) while reserving Washington’s right to define what sort of actions counted as threatening, as well as the right to decide how to respond to them. Inherent paternalism toward the region was soon complemented by outright unilateralism and interventionism.

… at the turn of the century, President Teddy Roosevelt deepened the Monroe Doctrine’s link to unilateral U.S. interventions. Most infamously, his “corollary” to the principle claimed, for the newly powerful United States, a right and duty to police its neighborhood. President Woodrow Wilson—otherwise Roosevelt’s adversary on many foreign-policy questions—largely shared this view of the Monroe Doctrine. Wilson insisted that Monroe be mentioned in the League of Nations Covenant to enshrine U.S. unilateral prerogatives.

By this point, even sympathetic Latin Americans had soured on the doctrine—and Monroe became a rallying cry for the region’s nationalists and anti-imperialists. Roosevelt’s interpretation of the doctrine largely displaced those that emphasized solidarity and restraint. The era was infused with racial and civilizational conceits: the arrogant conviction that the United States had both a right and a duty to tutor and discipline Latin Americans.

While explicit mentions of the Monroe Doctrine declined, U.S. foreign policy toward the region took on a more interventionist zeal at the height of the Cold War. With the justification of excluding Soviet influence, the U.S. government helped overturn reformist democratic projects across Latin America to install U.S.-friendly dictators—most notoriously in Guatemala in 1954, the Dominican Republic in 1965, and Chile in 1973. Commenting on Chile in 1970, the late U.S. Secretary of State Henry Kissinger said that the “issues are much too important for [Latin American] voters to be left to decide for themselves.”

The Return of Spheres of Influence
By Monica Duffy Toft

Even though another world war is not yet on the horizon, today’s geopolitical landscape particularly resembles the close of World War II, when U.S. President Franklin Roosevelt, British Prime Minister Winston Churchill, and Soviet leader Joseph Stalin sought to divide Europe into spheres of influence. Today’s major powers are seeking to negotiate a new global order primarily with each other, much as Allied leaders did when they redrew the world map at the Yalta negotiations in 1945. Such negotiations need not take place at a formal conference. If Putin, U.S. President Donald Trump, and Chinese President Xi Jinping were to reach an informal consensus that power matters more than ideological differences, they would be echoing Yalta by determining the sovereignty and future of nearby neighbors.

Unlike at Yalta, where two democracies bargained with one autocracy, regime type no longer appears to hinder a sense of shared interests. It is hard power only—and a return to the ancient principle that “the strong do what they can and the weak suffer what they must.”

The term “sphere of influence” first cropped up at the 1884–85 Berlin Conference, during which European colonial empires formalized rules to carve up Africa. But the concept had shaped international strategy long before that. During the 1803–15 Napoleonic Wars, France attempted to expand its influence by conquering nearby territories and installing loyal puppet regimes, only to be countered by coalitions led by the United Kingdom and Austria. The British and Russian Empires engaged in protracted struggles for dominance over Central Asia, particularly Afghanistan. The Monroe Doctrine, adopted in 1823 by the United States, asserted that European powers would not be allowed to interfere in the Western Hemisphere, effectively establishing Latin America as a U.S. sphere of influence.

It is worth noting that the Monroe Doctrine was, in part, inspired by Russian Emperor Alexander I’s efforts to counter British and American influence in the Pacific Northwest by expanding Russia’s settlements and asserting control over trade. In an 1824 accord, however, Russia agreed to limit its southward expansion and acknowledge American dominance over the Western Hemisphere. Alexander I recognized that encouraging further European colonization of the Americas risked sparking more instability and war.

Great powers’ drive to establish spheres of influence persisted through the late nineteenth and early twentieth centuries, shaping new alliances and ultimately triggering World War I. In his wartime effort to delegitimize the Austro-Hungarian, German, and Ottoman Empires, however, U.S. President Woodrow Wilson pointed out that colonialism amounted to an oppressive boot on the neck of nations’ self-determination. In the process, U.S. allies—in particular, France and the United Kingdom—suffered collateral damage and struggled to maintain their colonies in the face of a rising tide of nationalist sentiment. Given the close connection between “spheres of influence” and colonialism, by the end of World War II, both concepts came to be seen as backward and a likely catalyst for conflict.

Yalta marked a decisive return of politics based on spheres of influence, but only because the participating democracies tolerated it as a necessary but hopefully short-lived evil, the best available means to prevent another catastrophic world war. The United Kingdom and the United States had each become war-weary. By August 1945, no democratic politician could reasonably oppose demobilization. Stalin did not suffer from this problem. But if deterrence could not be supplied, the only other way to prevent Stalin from ordering the Red Army westward was to engage with his demands.

In the nineteenth century, power politics had hinged on military and economic might. In the second half of the twentieth century, the ability to shape global narratives through soft power became almost as vital: the United States exerted influence through its dominance in popular culture, provision of foreign aid, higher education, and investments in overseas initiatives such as the Peace Corps and democratization efforts. The Soviet Union, for its part, actively promoted communist ideology by mounting propaganda and ideological-outreach campaigns that attempted to shape public opinion in far-flung countries.

Without the stark ideological divide of the Cold War, many political scientists assumed that world politics would shift toward economic interdependence, demonstrating through action the benefits of working in teams to solve hard problems. The global spread of democratic norms and the swift integration of former Soviet and Eastern bloc states into international institutions reinforced the belief that power could—and should—be diffused through collective frameworks; the Cold War’s geopolitical fault lines seemed to vanish.

NATO’s U.S.-led intervention in Kosovo in 1999 (which particularly incensed Putin) and the United States’ 2003 invasion of Iraq (over the objections of close U.S. allies) both suggested that the leaders of the supposed new era of collective security still believed that when a strong state does not get its way, it is acceptable to escalate militarily. More recently, the United States and China have been locked in a struggle for global technological and economic dominance, with Washington imposing sanctions on Chinese tech giants while Beijing invests heavily in alternative supply chains and its massive Belt and Road Initiative. China has also militarized the South China Sea and has pursued expansive and legally disputed territorial claims. The United States and its allies, meanwhile, have increasingly used financial sanctions as tools to constrain adversaries.

It is clear from Putin’s many recent speeches that he never really abandoned an understanding of geopolitics that rested on spheres of influence and always struggled to understand why NATO should continue to exist, much less expand. If the alliance’s purpose had been to defend the West against the Soviets, then after the Soviet Union collapsed, NATO’s expansion effectively made the entirety of Europe—and particularly the former Warsaw Pact states—an American sphere of influence. For Putin, this was an unacceptable outcome.

The reemergence of spheres of influence signals that the nature of the global order is being tested. This shift could lead to a transition back to the power politics of earlier eras. But there is an alternative: after experiencing a few cycles of destabilizing crises, the international system might reassert itself, reverting to a rules-based order centered on multilateral cooperation, economic globalization, and U.S.-led or collective security arrangements that discourage expansionist ambitions.

For the time being, however, the United States is no longer serving as a reliable stabilizer. Where Washington, until recently, was considered the primary check on regionally expansionist regimes, it now appears to be encouraging those same regimes, and even imitating them.

Trump says U.S. will ‘get Greenland,’ military force may not be needed but not ruled out
By Francesca Chambers

President Donald Trump refused on Saturday to take military force off the table in his quest to acquire Greenland, saying he had an obligation to pursue ownership of the Danish territory that has rebuffed his advances.

“No, I never take military force off the table. But I think there’s a good possibility that we could do it without military force,” Trump said. “We have an obligation to protect the world. This is world peace, this is international security. And I have that obligation while I’m president. No, I don’t take anything off the table.”

Denmark is a NATO ally and Greenland’s residents say they are opposed to becoming a part of the United States.

Still, the U.S. president has pressed ahead. “We need Greenland, very importantly, for international security. We have to have Greenland,” he said on Friday at the White House.

Asked by NBC what message acquiring Greenland would send to Russia, which has sought to expand its own territorial footprint, and the rest of the world, Trump said: “I don’t really think about that. I don’t really care. Greenland’s a very separate subject, very different. It’s international peace. It’s international security and strength.”

Are Western double standards undermining the global order?
By Christoph Hasselbach

“Wherever I go, I find myself confronted with the accusations of double standards,” said EU foreign policy chief Josep Borrell at Oxford University in May. At last year’s Munich Security Conference (MSC), French President Emmanuel Macron said: “I am struck by how much we are losing the trust of the Global South.” And after she criticized China’s human rights record last year, German Foreign Minister Annalena Baerbock was told by her Chinese counterpart at the time, Qin Gang: “What China needs least is a teacher from the West.”

… attempts by the West to persuade friendly democratic countries in the Global South to support sanctions against Russia have often failed. This was the case with India, Brazil, and South Africa, all of whom continued to trade with Russia, in some cases more than before the war, even while insisting that this was not an endorsement of the Russian invasion of Ukraine.

India’s Foreign Minister Subrahmanyam Jaishankar has said, referring to the war in Ukraine, that the Europeans apparently believe that “Europe’s problems are the world’s problems, but the world’s problems are not Europe’s problems.” And Indonesian Defense Minister Prabowo Subianto Djojohadikusumo said that when it came to advocating peace, Western governments apparently have “one set of principles for Ukraine and another set of principles for the Palestinians.”

Sophie Eisentraut, head of research at the MSC, described a growing problem for the West in her latest security brief, entitled “Standard Deviation – Views on Western Double Standards and the Value of International Rules.” In a world of increasing geopolitical rivalry, she argues, the Western model of a rules-based order is increasingly suffering from a loss of credibility — at a time when the West is losing power.

For example, countries from the Global South point out that the US and other Western states insist on the principle of territorial integrity in Ukraine, but did not respect this principle during the US-led invasion of Iraq in 2003. Western states have often disregarded human rights by carrying out illegal detentions as part of their war on terror. And the Europeans have made common cause with North African autocrats in order to prevent migration to Europe.

However, Eisentraut also points out that critics from countries such as China and Russia often use their accusations to relativize their own violations. Or they use them to justify an approach to foreign policy that is no longer based on moral principles at all, but only on their own interests. The result is that the value of universal rules is being questioned around the world.

Not only would Western countries have much to lose in a world without generally accepted rules, but so would many countries in the Global South.

Elon Musk Is America’s Cecil Rhodes
By Caroline de Gruyter

Once you have seen it, you cannot unsee it anymore: a cartoon of the British businessman Cecil Rhodes, published in the Dec. 10, 1892, edition of Punch magazine. The cartoon, drawn by Edward Linley Sambourne, is called “The Rhodes Colossus.” It depicts Rhodes, a late 19th-century diamond-mining magnate in South Africa who used his fortune to help the British expand their empire, as a giant in a colonial outfit towering over Africa, a gun slung over his shoulder, with one booted foot firmly planted in Cairo and the other in Cape Town. He holds a telegraph line in his hands. In a phase of accelerated, furious imperialism at the end of the 19th century, when European powers raced to conquer much of Africa, Rhodes planned to build a rail and telegraph line from Cape Town to Cairo, connecting all British African colonies like beads on a string.

You cannot unsee the cartoon, because today, in our time, Elon Musk is to U.S. President Donald Trump more or less what Rhodes was to the British Empire in his day: an oligarch with far-reaching powers and liberties bestowed on him by the state to help it grab as much of the world’s land, waterways, resources, and labor as it can before someone else does.

In a liberal capitalist system, competition is a key element. It is about people and companies creating things they exchange or sell to each other. The idea is that there is plenty of space for everyone. Everyone, in theory at least, should be able to get a little piece of the pie. In the world of liberal capitalism, moreover, there are rules that apply to all.

Predatory capitalism is the stage that can follow, when the feeling of abundance gives way to an acute sense of scarcity, and when exchange changes into conquest. At least, this has already happened twice before in history—in the 17th and 18th centuries, and at the end of the 19th century—and now it seems to be happening again, according to a remarkably lucid book just published by French economist Arnaud Orain: Le monde confisqué; Essai sur le capitalisme de la finitude (XVIe-XXIe siècle).

From Trump’s territorial claims on the Panama Canal, Canada, and Greenland, to his constant bullying of closely allied nations, to the greedy billionaires who helped get him elected and now want to meddle with foreign countries in the name of the world’s most powerful state—today, all this may seem outrageous and almost unreal, but the two earlier predatory capitalist phases were characterized by the same kind of frenzy, hubris, and imperial behavior.

Orain argues that liberal capitalism became a “capitalism of finitude” around 10 years ago, via an intermediary, neoliberal stage (when the rules-based order started to malfunction and partly break down). What defines the capitalism of finitude is that, suddenly, competing powers convince themselves there is not enough for everyone on Earth. So, they unleash a cutthroat rivalry. Existing rules, international treaties, and codes of conduct are furiously cast aside in order to take the world’s waterways, land, resources, data, and human labor—anything they find useful to expand their power.

It is not just Trump and his digital billionaires who do this. It is also Israel, waging all-out war against Gaza and bombing Lebanese villages without any of the restraint that characterized the Israeli military in previous decades, and in total disregard of international law. It is Russia invading its neighbors because President Vladimir Putin wants the old empire back, using former Wagner Group militias to get to Africa’s minerals, and trying to expand its holdings on Spitsbergen (which belongs to Norway) and dominate the Arctic. It is China, too, creating facts on the water in the South China Sea, and being hard at work to establish semi-colonial “stations” in strategic ports and get ownership of arable land (in the book, they are called “ghostly hectares”), plantations, and mines on other continents.

Capitalism of finitude: pessimism and bellicosity
By Branko Milanovic

We are entering, according to Orain, one of the periodic readjustments of capitalism, from free trade to the “armed trade” characteristic of mercantilism. Moreover, in Orain’s reading of capitalism, epochs of mercantilism have been more common than times of laissez-faire and free trade. He considers three such mercantilist periods: the European conquest of the world (17th and 18th centuries), 1880-1945, and the present.

The most important features of mercantilism are that it regards trade, and perhaps economic activity in general, as a zero-sum game, and that it creates a world that is never fully at peace nor fully at war. The normal state of mercantilism is constant conflict, whether fought by arms or by a multitude of other coercive means (piracy, ethnic cleansing, slavery, etc.). Mercantilism implies (i) control of the routes by which goods are transported, which, in the past as now, means control of the oceans; (ii) a preference for vertical integration of production and trade, which implies monopolies and monopsonies; and (iii) the struggle for land, either as a source of raw materials and food (especially when Malthusian ideologies take over) or in the form of harbors and entrepôts to complement naval power.

One of the main ideological roles is assigned to the American naval strategist Alfred Mahan, who formulated what Orain defines as the two “laws.” The first holds that there is a natural progression of a country from being a big producer of goods, as China is now, to needing to ship those goods abroad, and thus to controlling naval routes. It must become a naval power or, ideally, a naval hegemon. It also needs to create a set of entrepôts to support its naval deployment. Mahan’s second law is that there is no clear difference between commercial and war navies. Since trade is “armed,” the distinction between the two largely disappears, and Orain provides many historical examples in which Dutch, English, Swedish, Danish, and French fleets, whether commercial or naval, played both roles.

Orain presents an extraordinarily rich historical canvas of European conquest and of intra-European “demi-wars” in foreign lands during the 17th and 18th centuries. Companies such as the Dutch, British, and French East India and West Africa companies play the key role. Orain highlights that these companies often took on government functions (most famously in the case of the East India Company), extracting “regalian” rights from home governments and imposing themselves by force on the governments of the conquered lands.

Out of our minds: opium’s part in imperial history
By Lewis Dartnell

In a natural world full of dangers, it is probably safer to follow everyone else, even if you’re not convinced it’s the best course of action, rather than risk going it alone. Such herd behaviour is a way of crowdsourcing information – others may know something we don’t – and can serve as a quick judgment tool, allowing us to economise on the time and cognitive effort in deciding everything for ourselves.

Our herding bias has caused the surges of fads and fashions throughout history. It influences the adoption of other cultural norms, religious views or political preferences as well. But the same psychological bias also destabilises markets and financial systems. The dotcom boom of the late 1990s, for instance, was driven by investors piling in to back internet companies even though many of the startups were not financially sound. Investors followed one another, assuming that others had a more reliable assessment or simply not wanting to be left behind in the frenzy, only for the bubble to burst and stock markets to fall sharply after early 2000. Such speculative bubbles have recurred through history since “tulip mania” in the early 17th-century Netherlands, and the same herding behaviour is behind modern boom and bust cycles such as in cryptocurrency markets.

One of the most salient aspects of humanity is how, as an intelligent, self-aware species, we actively seek ways to alter our state of mind. We exploit the botanical world not only to feed ourselves, but to intentionally modify the functioning of our brain – to stimulate, to calm or to induce hallucinations. Indeed, enjoying getting out of our own minds is pretty much a universal of human cultures. The pursuit of money and power has found profitable ground in the human desire for altered states, and played a part in shaping human history.

The demand for tea in Britain had grown steadily throughout the 18th century. By the 1790s, most of it came from China, with the East India Company shipping around 10,000 tonnes of tea leaves from east Asia to London every year. But there was one major problem: China had little interest in anything the British empire could offer in return. The Qianlong Emperor wrote to King George III in 1793, “Our Celestial Empire possesses all things in prolific abundance and lacks no product within its borders. There was therefore no need to import the manufactures of outside barbarians in exchange for our own produce.” Britain was facing a colossal trade deficit.

The only European commodity that China desired was hard cash in the form of silver. Throughout the second half of the 18th century, therefore, about 90% of Britain’s trade exports to China were bullion. The British government was struggling to raise enough silver to keep this trade going, and the East India Company was becoming concerned about maintaining its profits.

But then the agents of the East India Company realised they could create a growing market for something that they could source in bulk. While the Chinese government would only consider silver for official trade, the Chinese people were keen on something else: opium.

Opium was legal in Britain in the early 1800s, with British people consuming between 10 and 20 tonnes of the stuff every year. Powdered opium was dissolved in alcohol as a tincture called laudanum, which was freely available as a painkiller and even present in cough medicine for babies. Many late-18th and 19th-century literary figures were influenced by opium, including Lord Byron, Charles Dickens, Elizabeth Barrett Browning, John Keats and Samuel Taylor Coleridge. Thomas De Quincey found fame with his autobiographical Confessions of an English Opium-Eater. Drinking opium in this way produced mild narcotic effects but was also habit-forming – society at this time was therefore pervaded by high-functioning opium addicts, including many among the lower classes who were looking to numb the tedium of working and living in an industrialised urban world. But while laudanum helped inspire a few poets and fuelled bouts of aristocratic debauchery, drinking it delivered a relatively slow release of opiates into the bloodstream.

The Chinese, on the other hand, had taken to smoking opium. This delivers a much more rapid hit, which is consequently far more potent and addictive. The Chinese probably first came across opium smoking in the 17th century in the Dutch colonial outpost in Formosa (Taiwan); the Portuguese then began shipping the drug from their Indian trading hub in Goa to Guangzhou (then known as Canton) in the 18th century. So, although the East India Company didn’t create the initial demand for opium in China, they turbocharged it. They could bank on the key property of addictive substances: once you’ve gained a clientele for your product, you can be assured that your customers will keep coming back.

Instead of sending silver to China, the East India Company trafficked opium – and they could effectively grow as much of this new currency as they needed. Before long, the company was pushing the drug in amounts never seen before. Ultimately, it boiled down to one addiction being traded for another – caffeine for opium – but the British were forcing a far more destructive substance on the Chinese. In order for the English mind to be focused with tea, the Chinese mind was fogged with opium.

The East India Company had seized control of Bengal from the Mughal empire after the Battle of Plassey in 1757. It came to establish a monopoly on opium cultivation in the region and started running the drug into China. Opium consumption for non-medicinal uses was outlawed in China – the first laws banning opium had been enacted in 1729 – and so the East India Company couldn’t be seen to be illegally importing opium, as that would force a response from the emperor. Instead, it used independent “country firms” as middlemen – Indian merchants licensed by the company to trade with China. These firms sold the opium for silver in the Pearl River estuary, where it was then smuggled ashore.

This was a thinly veiled effort by the company to wash their hands of their formal involvement in the trafficking. As historian Michael Greenberg has put it, the East India Company “perfected the technique of growing opium in India and disowning it in China”. Meanwhile, a network of opium distribution spread through China, helped by corrupt officials who’d been paid off to look the other way.

The East India Company readily expanded its pipeline pumping opium into China until, in 1806, the tipping point was reached and the trade deficit was forcibly reversed. The large numbers of Chinese opium addicts were now collectively paying so much to feed their habit that Britain was making more money from selling opium than it was spending on buying tea. The silver tide had been turned and the precious metal began flowing from China to Britain for the first time. The amount of opium imported into China by the East India Company trebled between 1810 and 1828, and then almost doubled again by 1832, to about 1,500 tonnes every year. The British empire, fuelled in the early days of its expansion across the Atlantic by one addictive plant, tobacco, was now wielding another, the poppy, as a tool of imperial subjugation.

The Opioid High of Empire
By Nikhil Kumar

As China sought to crack down on opium imports, the trade led to wars and, ultimately, “to immense profits for the British Empire for well over a hundred years,” Ghosh writes. Britain’s victory in the mid-19th-century Opium Wars brought the empire control of Hong Kong, compensation for destroyed opium, and the forced legalization of the opium trade in China. Profits also flowed across the pond, as American capitalists, cut off from trade with nearby British colonies following their War of Independence, started to import Turkish opium to China.

This trade, as Ghosh shows, had an outsized impact on today’s world, from the global drug trade to modern India’s economy, where some of today’s economic problems can be traced back to the British opium monopoly. The Indian states where opium production was centered were rapaciously exploited by colonial officials, and they remain among the poorest in the nation today. Furthermore, he writes, many “of the cities that are now pillars of the modern globalised economy—Mumbai, Singapore, Hong Kong and Shanghai—were initially sustained by opium.”

The Deep Roots of Oligarchy
By Priya Satia

Traditional monarchs were corporate bodies in their own right, expressing the body politic in their personal form. From the 16th century, the British monarchy also relied on another kind of corporation, the chartered monopoly company, to pursue interests abroad, including the East India Company (EIC), Virginia Company, Massachusetts Bay Company, Royal African Company, and Hudson’s Bay Company. Members of such companies enormously influenced the government bureaucracy that began to emerge to address the questions of trade and war that their activities sparked. Finally, with the 1688-89 Glorious Revolution, Parliament, controlled by wealthy elites also involved in such enterprises, dramatically curbed kingly authority; sovereignty migrated from the king’s person to the nascent institutions of the modern state, a new form of continuous public power above the ruler and the ruled.

This state was an oligarchy. The aristocrats who controlled Parliament used it to usurp the common rights of ordinary people, passing thousands of laws privatizing land—an internal colonialism that displaced masses of people, many of whom took their memory of oligarchic injury to North America, where they in turn displaced Indigenous inhabitants. The state also depended on contractors and corporations such as the EIC for the military and financial organization that it still lacked. Corporations’ interests in Asia, Africa, and the Americas were entangled with and dependent on Britain’s diplomatic, military, and political pursuits. Apart from trade and conquest, such companies bought, sold, and leased sovereignty like a commodity, as, for instance, with the EIC’s creation and sale of the princely state of “Jammu and Kashmir” to its allies in its conquest of Punjab.

In short, corporations were the new state’s partners in governance. The British state was Parliament, an emerging fiscal-military bureaucracy, and the Crown, plus corporations (including the Royal Mint and the Bank of England) and private contractors and financiers. In this sense, it, too, had a corporate form. It’s impossible to say where the state ended and the private sphere began.

The Merchant’s Leviathan
By Caroline Elkins

When the East India Company commissioned Richard Greenbury to paint the events that unfolded in 1623 on Amboyna, a clove-producing island in what is now the Indonesian archipelago, the artist’s rendering was so disturbing that it had to be altered. Titled The Atrocities at Amboyna, the painting depicted the torture and beheading of ten English merchants by Dutch agents for allegedly trying to usurp control of the region’s lucrative spice trade.

The Amboyna Massacre, as it was known, became part of the company’s and, by extension, the British Empire’s origin story in Asia. Defeat at Amboyna and the cruel deaths of its merchants steered the company away from the East Indian spice trade and competition with the powerful Dutch monopoly trading company, the Vereenigde Oost-Indische Compagnie, or VOC. Instead, the East India Company turned to cotton and silk textile trading with the Mughal Empire in South Asia, eventually expanding its lucrative trade to commodities such as tea and the illegal trafficking of opium while extending its reach across the Indian subcontinent and into the Persian Gulf, China, and elsewhere in Asia.

The East India Company, arguably the most enduring public-private partnership in history, helped make the British Empire, fuel the industrialization of Europe, and knit together the global economy. It was chief among dozens of other chartered companies granted monopoly rights and sovereign claims through politically issued charters. These companies, the avatars of the crown, merged the pursuit of profit with the prerogatives of governance, although, much like the VOC, they often governed as they pleased, without legally enforceable responsibilities. Importing raw materials, they fed the Industrial Revolution’s manufacturing sector, stimulated demand for foreign products, and dominated London’s burgeoning capital markets while also undertaking the state’s bidding in establishing the British Empire. Chartered companies helped claim nearly a third of the world’s territory for the United Kingdom, making its empire the largest ever known.

Chartered companies often existed beyond the reach of state regulation and in many ways functioned as states in their own right, with their own forms of sovereignty. Their legacy extends to contemporary tensions between hefty multinational technology corporations and the countries that struggle to restrain them.

Born from incessant warfare, the East India Company and the VOC became two of the most powerful corporations the world has ever known. Both were granted chartered monopolies—the East India Company in 1600 from Queen Elizabeth I and the VOC in 1602 from the States General of the recently founded Dutch Republic—and possessed powers typically associated with sovereign states, including the right to build forts, raise armies, mint currency, wage war, sign treaties, and engage in diplomacy with non-European entities.

They were also unmistakably merchant companies on the cutting edge of financial innovation that, when combined with unbridled ambition and the license to wage war, generated incredible wealth, enough to fuel the Dutch Golden Age of the seventeenth century and the British Industrial Revolution in the eighteenth and nineteenth centuries.

The VOC emerged as the first publicly listed company, paying average yearly dividends of 18 percent for almost two centuries. At its height, the VOC was worth the equivalent of around $7.9 trillion in today’s dollars—a figure greater than the combined current valuations of Alibaba, Alphabet, Amazon, Apple, AT&T, Bank of America, Berkshire Hathaway, Chevron, ExxonMobil, Facebook, Johnson & Johnson, McDonald’s, Netflix, Samsung, Tencent, Visa, Walmart, and Wells Fargo.

By the early nineteenth century, the East India Company had a private army of 200,000 (twice the size of the United Kingdom’s), accounted for nearly half the country’s trade, and ruled over the bulk of the Indian subcontinent. Shareholders annually elected merchant-statesmen who crafted and implemented policy, determining the fates of millions who were beholden to the company.

The VOC and the East India Company blurred the lines between public and private in pursuit of profit and power. They were not alone. European states chartered and assisted scores of private companies that, in turn, provided the twin engines of imperial and capitalist expansion around the globe.

English merchants, laying the economic and political foundations of empire, nonetheless relied on state intervention to guarantee their access to markets. Gunboats kept the doors of trade open in, for instance, China during the Opium Wars of the nineteenth century.

Protective tariffs also supported the trading companies’ commerce at home while helping to finance their military activities overseas. Imperial companies contributed handsomely to the state coffers through customs duties, with just five chartered companies, principally the East India Company, accounting for about a third of Great Britain’s customs duties in the mid-eighteenth century.

These companies signed treaties, made territorial claims, subdued and exploited local populations, and lined the pockets of European investors while often operating like states with transnational reach, as opposed to subjects governed by laws and obligations.

As early as the seventeenth century, an English member of parliament described chartered companies as “bloodsuckers of the commonwealth.” Their destructive behavior extended to local populations through famine-inducing measures, exploitative labor policies, and scorched-earth tactics. The economist Adam Smith derided state-sponsored joint-stock companies for their “negligence and profusion,” insisting they were holdovers from a bygone era of capitalist organization that stymied economic growth and gave rise to gross mismanagement and abuse. Edmund Burke spearheaded the impeachment trial of Warren Hastings, the East India Company’s governor-general of Bengal, in the House of Lords, outraged not by Great Britain’s imperial ambition but by Hastings’s conduct and that of East India Company officials, whom Burke derided. “School boys without tutors, minors without guardians,” he said at the trial. “The world is let loose upon them, with all its temptations; and they are let loose upon the world, with all the powers that despotism involves.”

Governments and corporations today can and must learn from the global giants that funded and drove Europe’s empires. These companies, launched as capitalist enterprises, rapidly became public-private partnerships, acquiring and maintaining territory while transcending sovereign boundaries and often remaining at arm’s length from the laws of sovereign states. Much like some of today’s multinational corporations, they were masterful at lobbying, bribing, and creating new company structures and forms to avoid state-driven legislation that, more often than not, failed to rein them in.

Whether it be the East India Company or Facebook, multinational companies have often exploited state weaknesses and outright failure. One just needs to cast an eye at today’s global commons—from technological infrastructure and mass digital communication to basic needs such as food and financial credit—to see the ways in which states are failing and dependent on public-private partnerships, often fully outsourcing some responsibilities to private actors.

Government Built Silicon Valley
By Julian E. Zelizer

Much of the development of large mainframe computing systems was born of defense needs. Although early computing machines were being built in the 1930s, it was during the war that the U.S. Army and several other defense units developed the Electronic Numerical Integrator and Computer (ENIAC) under the direction of Maj. Gen. Gladeon Barnes. Congress devoted massive resources (the equivalent of millions in today’s dollars) to the construction of what would become the first general-use computer. The most important initial function of ENIAC, which was completed in 1946 by University of Pennsylvania scholars John Mauchly and J. Presper Eckert, was its ability to provide cutting-edge calculations about the trajectories of weapons. Before the project ended, the government discovered ways to use ENIAC for a wide range of jobs, including advanced weather prediction and wind tunnel design. With funding from the Census Bureau, Mauchly and Eckert next worked on the Universal Automatic Computer (UNIVAC), resulting in a digital computer allowing for data processing and storage methods that were new and extremely beneficial to industry. With CBS anchor Walter Cronkite standing by, UNIVAC, which weighed a whopping 16,000 pounds, famously predicted early on election evening in 1952 that Dwight Eisenhower would defeat Adlai Stevenson by a landslide. A computer star was born. The machine would even appear on the cover of a Superman comic book.

The Defense Advanced Research Projects Agency (DARPA), established in 1958, undertook high-risk, large-scale research, cooperating with private firms, that had the potential to produce enormous payoffs. DARPA was central to the Advanced Research Projects Agency Network (ARPANET) in the late 1960s, which constituted the first advanced computer network. Much of the military’s drive had been the desire for a functional network that could survive a nuclear attack. ARPANET was the basis for the modern internet. The National Science Foundation launched a separate network, called NSFNET, in 1986. The foundation connected five supercomputer centers and granted academic networks access. The project was considered to have been the “backbone” for the creation of the commercial internet. Other notable computer innovations also grew out of this operation. DARPA dollars facilitated the Stanford Research Institute’s development of the computer mouse, a technology that made it easier for an individual without great technical expertise to interface with computers. In 1991, Congress passed the High-Performance Computing Act—legislation that then-Senator Al Gore helped move—which funded a team of programmers at the University of Illinois’s National Center for Supercomputing Applications that helped vastly expand the internet. Marc Andreessen, one of the engineers who co-created Mosaic and Netscape, acknowledged in 2000, “If it had been left to private industry, it wouldn’t have happened, at least, not until years later.”

The federal government has helped high tech in many other ways besides policies directly related to computers and the internet. Immigration reforms, for instance, that opened the doors to high-skilled foreign-born immigrants resulted in the arrival of people who helped build the computing products that the entire world depends on today. The Hebrew Immigrant Aid Society helped a young Sergey Brin and his family obtain a visa to emigrate from the Soviet Union in 1979. With that, Google was born. Musk was able to finish his education at the University of Pennsylvania with a student visa and stay in the United States because of an H-1B visa. Yahoo co-founder Jerry Yang immigrated with his family from Taiwan in 1978. The Small Business Investment Incentive Act (1980) provided valuable dollars to Silicon Valley firms as they struggled to make a name for themselves.

Indeed, Musk’s company Tesla benefited from government assistance. In 2009, a critical moment for the company, Tesla received $465 million in low-interest loans from the U.S. Energy Department that it used to construct the Model S. Electric vehicle tax credits have grown consumer demand for his and other vehicles. Federal research grants played a role in the different components that make up these cars.

Elon Musk Is Corrupting More Than Just Government
By Tim Wu

Silicon Valley once prided itself, often with justification, on its libertarian, countercultural, do-gooder ethos. Thanks to figures such as Mr. Musk, Peter Thiel and Marc Andreessen, the industry is now on a trajectory to become all the things it once claimed to hate: too big and too dependent on government largess — and a threat to human liberty.

One can romanticize the California computing culture of the 1960s and ’70s, but it was a significant shift from the stodgy, establishment world of companies like IBM and AT&T. As promulgated by counterculture mavericks like Stewart Brand, the original Silicon Valley ideology sought to combine the business of computing with the enlightened pursuit of human progress. As Fred Turner, a historian of the movement, characterized its aspiration, “What the communes failed to accomplish, the computers would complete.”

It makes sense, given the industry’s aversion to rules and conformity, that Silicon Valley long kept a distance from Washington. The Electronic Frontier Foundation, a nonprofit founded to defend the tech industry’s ideals, deliberately avoided having a Washington headquarters.

The first important Silicon Valley figure to proudly depart from the industry ethos was Mr. Thiel, the billionaire investor who helped found PayPal and who has always preferred the Übermensch to the underdog. “Zero to One,” a book on start-ups that he published in 2014, called on every company to seek a monopoly to call its own, insisting that “competition is for losers.” He co-founded a data-analytics company, Palantir, whose main customers have been the National Security Agency, the F.B.I. and the C.I.A. He was the first high-profile tech mogul to endorse Mr. Trump for president, which he did in 2016 — and presumably the first to openly question the merits of democracy, which he did in an essay in 2009.

If everyone is going to think you are big and evil anyhow, you might as well ditch the noblesse oblige and embrace being big and evil.

This shift in Silicon Valley’s attitude is most pronounced in its sudden fondness for military money. Mr. Musk and Mr. Thiel are military contractors, with Mr. Musk and his companies having received approximately $38 billion in public funds. Palantir also earns most of its income from government: Its $200 billion valuation on a mere $3 billion in revenue reflects a widespread impression that Mr. Thiel is a government favorite.

Silicon Valley venture capital has also become interested in military money, and here it is Mr. Andreessen who serves as a bellwether of change. He began his career in the 1990s as an underdog founder whose company, Netscape, was crushed by the mighty Microsoft, and he spent most of his career investing in idiosyncratic start-ups. But in recent years his investment company has poured money into military technology. At the same time, Mr. Andreessen — who supported Mr. Trump for president in 2024 after years of supporting Democrats — has become the evangelist of techno-optimism, a philosophy described by the Wired columnist Steven Levy as “an over-the-top declaration of humanity’s destiny as a tech-empowered superspecies.”

Like characters in an Ayn Rand novel, Mr. Musk, Mr. Thiel and Mr. Andreessen may believe that they have torn off the blinders of convention to seize the greatness they deserve. But their approach may actually weaken Silicon Valley in the long term. The industry’s embrace of government power and money threatens what has made it an engine of innovation and a magnet for creative talent. Most government-funded monopolies grow fat and lazy; today’s dynamic aerospace company becomes tomorrow’s Boeing. A culture of risk taking and innovative product design is less valuable than knowing where to donate money when it comes to landing large government contracts.

The New Empires of the Internet Age
By Daniel W. Drezner

Farrell and Newman’s Underground Empire builds on the pair’s pathbreaking 2019 article in International Security, “Weaponized Interdependence: How Global Economic Networks Shape State Coercion.” In that paper, Farrell, a professor at the Johns Hopkins School of Advanced International Studies, and Newman, a professor at Georgetown University, posited that, across many economic sectors, globalization generated a network structure that concentrated power in a few central nodes. States that controlled those nodes—such as the United States—could exercise considerable global influence.

Underground Empire reminds readers of the article’s core argument. In their introduction, the authors write, “The global economy relied on a preconstructed system of tunnels and conduits that the United States could move into and adapt, nearly as easily as if they had been custom-designed by a military engineer for that purpose. By seizing control of key intersections, the U.S. government could secretly listen to what adversaries were saying to each other or freeze them out of the global financial system.”
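
Neither the article nor the book supplies code, but the structural claim here, that globalization produced a network in which a few hub nodes sit on most of the paths connecting everyone else, can be illustrated with a standard graph measure. The sketch below is a hypothetical toy example, not Farrell and Newman's method: the node names and edges are invented, and betweenness centrality (computed with the NetworkX library) is used only as a rough proxy for how much traffic a "chokepoint" controller could observe or block.

```python
# Illustrative sketch only: a toy graph in which a few central nodes sit on most
# paths through the network. Node names and edges are invented for illustration.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("EU bank", "SWIFT"), ("Asian bank", "SWIFT"), ("US bank", "SWIFT"),
    ("Gulf bank", "SWIFT"), ("SWIFT", "NY clearing"), ("US bank", "NY clearing"),
    ("Asian exporter", "Asian bank"), ("EU importer", "EU bank"),
])

# Betweenness centrality: the share of shortest paths that pass through each node.
# High scores mark the hubs whose controllers can monitor or cut off flows.
centrality = nx.betweenness_centrality(G)
for node, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{node:15s} {score:.2f}")
```

In this toy graph the invented "SWIFT" node dominates the ranking, which is the structural sense in which a state controlling such a hub gains leverage over the whole network.
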

Underground Empire expands far beyond this point, however. There is an interesting discussion, for one, of how the private sector helped create this centralized world and the conditions under which multinational corporations have been willing extensions of federal power: “Entrepreneur after entrepreneur discovered that the best way to turn a profit in a decentralized economy was to figure out ways to centralize parts of it again.”

Bradford’s Digital Empires similarly builds on earlier work. Bradford draws from her 2020 book, The Brussels Effect, which argues that the EU’s combination of market power and technocratic capacity made it a superpower in issue areas where Europeans preferred stringent regulatory standards. In Digital Empires, Bradford, a professor at Columbia Law School, contends that there are three great powers when it comes to online technology—and each of them offers a different variety of digital capitalism. As she describes it, “the US has pioneered a largely market-driven model, China a state-driven model, and the EU a rights-driven model.”

These varying approaches lead to horizontal clashes between the United States, China, and the EU on regulatory and technological issues such as data privacy and content moderation. Divergent digital preferences also create vertical clashes between these governments and the tech companies supplying digital infrastructure and services. The Chinese state cowed its firms into greater compliance with government dictates; U.S. firms have been more willing to fight the federal government on questions of data privacy.

According to Digital Empires, the laissez-faire market-based U.S. model of tech governance is losing its appeal both at home and abroad. Domestically, both major political parties have soured on Big Tech, albeit for different reasons. Democrats distrust the corporate concentration of Big Tech, while Republicans are convinced that content moderation has an anti-conservative bias. Globally, concerns about data privacy have made life more difficult for the Big Five tech firms—Amazon, Apple, Google, Meta, and Microsoft.

As the U.S. model loses its luster, Bradford posits that the United States will align more closely with the EU against China’s more authoritarian model of digital governance. For Bradford, therefore, what matters is not control over critical nodes but control over critical markets.

The definitions of digital power proffered in Underground Empire and Digital Empires are not mutually exclusive. It can be simultaneously true that the United States retains considerable structural power and the EU exercises its market power adroitly—and all the while China tries to amass both forms of power. Still, if there are arenas of contestation where U.S., European, and Chinese officials disagree, whose form of power might prevail?

The Next AI Debate Is About Geopolitics
By Jared Cohen

Where industrial revolutions happen can reshape global affairs. Britain’s Industrial Revolution made London the center of an empire upon which the sun never set. The digital age took off in Silicon Valley, making the United States home to world-leading technology companies. But if AI leads to the next industrial revolution, that revolution will have been global from the beginning.

AI is a general-purpose technology. But unlike previous general-purpose technologies, such as electricity or steam engines, AI-enabled tools proliferated so quickly that cutting-edge innovations became widely available almost overnight, in the form of chatbots, image generators, and—increasingly—virtual co-pilots. The AI industry also depends on a network of global commercial partners, including not only U.S. and Chinese technologies, but also Taiwan’s semiconductor fabrication plants, extreme ultraviolet lithography machines made in the Netherlands, and other critical supply chain inputs. Competition over AI has so far been dominated by debates about leading-edge semiconductors, but the next phase is also about geography and power.

Data centers are the factories of AI, turning energy and data into intelligence. … The countries that work with companies to host data centers running AI workloads gain economic, political, and technological advantages and leverage. But data centers also present national security sensitivities: they often house high-end, export-controlled semiconductors, and governments, businesses, and everyday users send some of their most sensitive information through them.

Data is sometimes called the “new oil.” But there’s a crucial difference when it comes to data centers. Nature determines where the world’s oil reserves are, yet nations decide where to build data centers.

Data centers can be the targets of cyberthreats and espionage, especially over financial and national security data. Events such as the 1973 Arab oil embargo or the COVID-19 pandemic heighten concerns about relying on one, or even a small number, of foreign partners for critical resources, including data. A geopolitical conflict or natural disaster in a hotspot such as Taiwan could disrupt the world’s access to semiconductors and reduce the world’s ability to add new computing capacity. The chips and cables that connect GPUs are made of critical materials such as germanium and gallium, which have been the subject of export controls and bans by China.

Revolutionary technologies have made new locations critical for geopolitics throughout history. In the 19th century, the railroads connected the coasts of the United States and opened the resource-rich Eurasian heartland to competition between empires. In the 20th, telecommunications networks made it possible to send information instantly around the globe.

The data center buildout puts geography at the center of technological progress and competition.

Contributor: It’s no metaphor — undersea cables hold together our precarious modern life
By Colum McCann

It is estimated that more than 95% of the world’s intercontinental information travels through underwater cables that are no bigger than the pipes at the back of your toilet. Within those cables there are tiny strands of fiber optic material, the width of an eyelash. The 500-plus working data cables in the world carry not only our emails and phone calls but also the majority of the world’s financial transactions, estimated to be worth $10 trillion a day.

Fishing trawlers can snag a wire. Dropped anchors from cruise ships can inflict damage. An underwater earthquake or a landslide can snap a cable deep in the abyssal zone. Or, as has happened increasingly in the last year, the cables can be sabotaged by state actors and terrorists bent on disrupting the political, social and financial rhythms of an already turbulent world.

Cables — often several of them bunched together — come into our shores via landing stations. These are essentially coastal buildings in suburban areas that look like low-slung, windowless bungalows. The landing stations generally have minimal security. Even in the New York area, they are protected by little more than a camera and sometimes a chain-link fence. During the pandemic, I was able to access a Long Island landing station and stand directly above the manhole cover where the cables came in from across the Atlantic. With a crowbar I could have reached down and touched them, felt the pulse of the world’s information traveling through my fingertips.

But small-scale sabotage is never going to disrupt our vast information flow. One of the beauties of the internet is that it is self-healing, meaning that information, when blocked, simply travels in a new direction. But a coordinated series of attacks on the landing stations, combined with some low-level sabotage at sea (an ingenious diver can fairly easily cut a cable), augmented by some deep-sea sabotage (the severing of cables with ropes and cutting grapnels lowered from boats), could, in fact, bring the world’s economies to a screeching halt.
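A minimal sketch, not drawn from McCann’s essay, of the rerouting logic he describes: treat the cable network as a graph whose nodes are landing stations and whose edges are cables (the station names and topology below are invented for illustration). Cutting one cable leaves an alternate route; coordinated cuts isolate a station.

from collections import deque

def connected(graph, start, goal):
    # Breadth-first search: is any route left between two stations?
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

def cut(graph, a, b):
    # Return a copy of the network with the cable between a and b severed.
    g = {n: set(neigh) for n, neigh in graph.items()}
    g[a].discard(b)
    g[b].discard(a)
    return g

# Hypothetical topology: two transatlantic routes plus one spur.
cables = {
    "NewYork": {"London", "Lisbon"},
    "London": {"NewYork", "Lisbon"},
    "Lisbon": {"NewYork", "London", "Lagos"},
    "Lagos": {"Lisbon"},
}

one_cut = cut(cables, "NewYork", "London")
print(connected(one_cut, "NewYork", "London"))   # True: traffic reroutes via Lisbon

two_cuts = cut(one_cut, "NewYork", "Lisbon")
print(connected(two_cuts, "NewYork", "London"))  # False: NewYork is now isolated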

The AI Economy’s Massive Vulnerability
By Jared Cohen

In one dire incident in 2008, the severing of two Mediterranean cables shut down internet access in 14 countries, including 100 percent of connections in the Maldives, and 82 percent in India. What makes today’s risks even more vexing than they were then are questions of scale and significance. One-off cable cuts are relatively straightforward to fix. But if cables are damaged at particularly sensitive times or places, or on a wider geographic scale, then that task is much more challenging.

Meanwhile, subsea cables are growing even more necessary to daily life as artificial intelligence increases demands for global data transmission, including through overseas data centers connected to the global cable networks.

Hundreds of billions of dollars are being invested in AI infrastructure every year. But that infrastructure is often connected through one of the thinnest, and potentially most vulnerable, physical networks on the planet.

The individual fiber optic strands that, combined, make up the subsea cables through which the world’s information flows are often barely thicker than a human hair. And that delicate infrastructure predates the digital age. The first undersea cables made trans-Atlantic telegraph communication possible in the 1850s, and through the technological revolutions since, they’ve connected telephones, modern telecommunications devices, and now the global AI infrastructure.

As world powers vie for regional influence and partnerships, subsea cable infrastructure contracts have become commercial contests for spheres of geopolitical influence.

The Shifting Geopolitics of AI
By Ravi Agrawal

JC: … If you’re a geopolitical swing state reliant on critical components controlled by the United States, then the United States has a big say on how much you can swing. And because of export controls on high-end chips, the United States has forced the hand of countries that initially wanted to build an ecosystem blending Chinese and American technology. When it came to advanced technology, they had to make a binary choice.

The concept of a geopolitical swing state represents a new moment in history. These countries have certain attributes that position them to take advantage of the tension between the United States and China and achieve enormous mobility, which they will maximize to their own interests. The swings that they make or don’t make will be in service of clearing geopolitical brush to maximize their interest as well as their usefulness to both countries. But there are some issues where maximizing their usefulness exclusively to the United States serves their interests because the United States is the only country that has the 2-nanometer chips that they need.

Weaponized Plumbing: On Henry Farrell and Abraham L. Newman’s “Underground Empire”
By Krzysztof Pelc

Britain’s takeover of India was perpetrated not by the British crown annexing territory, but by a private firm, the British East India Company, which was run out of a small London office for the benefit of its shareholders. Its Dutch analogue, the Verenigde Oostindische Compagnie, laid the foundation for the Dutch colonial empire in Indonesia. In North America, it was fur traders who initially set up the routes and trading outposts that later became the basis for colonial claims.

Letting private merchants chart the path to future spoils suits the world’s sovereigns just fine. Ledgers seem less threatening than scepters, which is what makes them such effective tools of empire. In Underground Empire: How America Weaponized the World Economy (2023), a novel account of our digital economy by political scientists Henry Farrell and Abraham L. Newman, merchants once more function as handmaidens to government power. The beaver pelt peddlers and colonial profiteers of yore are now early telecoms like MCI WorldCom, which laid the fiber-optic cables that the original internet ran on, and banks like J.P. Morgan & Co. and Citibank, which dreamed of a borderless global payments system.

What’s most striking is that both the telecoms and the banks, in Farrell and Newman’s account, were led by libertarian-minded characters set on creating systems that would evade political oversight. Yet the opposite happened: internet networks, financial infrastructures, and global supply chains initially set up by private firms ended up being weaponized by governments—one government in particular. How to explain this outcome?

The first reason is easily overlooked: the modern digital economy is constituted of tangible matter. Every selfie, cat video, and viral meme—and, as it turns out, every covert communication between criminal kingpins or the central bank dealings of a rogue regime—gets stored somewhere on a physical server. The cloud, in other words, does not live in the clouds. It sits in climate-controlled data centers, large numbers of which are clustered in otherwise unremarkable single-factory towns where the factory is the internet.

As it turns out, a disproportionate share of those data centers is found on American soil, in innocuous-sounding places like Ashburn, Virginia. This overlooked physical reality, Farrell and Newman claim, means that beyond its sheer size, its nuclear arsenal, or even the soft power of its cultural industries, the United States now wields a novel source of influence, which has only become apparent in the last two decades. Stranger still, this power came about largely by happenstance. The internet emerged out of a US government project based in Northern Virginia, devised to facilitate collaboration among university researchers.

The turning point came in the wake of the September 11 attacks, as US intelligence agencies grasped for ways to intercept the private communications of suspected terrorists and hostile regimes. They resorted to turning the screw on the merchants. National telecom companies like Verizon/MCI eventually granted the NSA access to all their international cable traffic. And because the internet converged on a handful of cable landing sites on US territory—which the NSA itself refers to in its classified documents as “choke points”—they represented a unique portal onto the world’s private communications. It was the type of power that can’t be reined in once unboxed. Privacy laws were quickly updated to allow for a broad harvesting of these data in the name of national security. The world entered a new era of government surveillance; the path had been laid by private companies.

Intelligence agencies were not the only ones to awaken to their newfound superpowers; the US Treasury Department had its own slow-moving epiphany. Until that point, Treasury had viewed any political interference in financial markets as threatening the United States’ credibility. But when it came to light that the September 11 hijackers had been sent money using regular wire transfers that passed through the Society for Worldwide Interbank Financial Telecommunication (SWIFT), the system that the world’s banks use to speak to one another, Treasury abruptly abandoned its reservations. It demanded access to SWIFT’s data, and although similar requests had been denied in the past, this time SWIFT gave in. The United States suddenly gained an unprecedented real-time look into clandestine financial transactions.

It wasn’t just information. Less than a year later, in response to suspicions that Iran was relying on SWIFT to finance its nuclear program, American officials pushed for Iranian banks to be cut off from the global financial switchboard. SWIFT yielded to pressure once more. The effects were immediate. Since Iran is heavily reliant on exports of oil, which is traded in dollars, it could no longer collect payments for its goods, and was reduced to bartering. Its exports dropped by 75 percent. This was the turning point in the negotiations over its nuclear program: in exchange for lifting the SWIFT ban, Iran began making concessions. A deal was eventually reached.

To its own surprise, the Obama administration proved unable to lift the sanctions it had imposed. Even once it allowed European banks to restart their dealings with Iran, the banks balked out of fear that the United States might change its mind. In the chummy world of global finance, reputation is everything, and a black mark is not easily erased. The difficulty of reversing course is a recurrent aspect of economic sanctions, and one that arguably limits their usefulness as bargaining chips. Ultimately, the bankers were proven right. The Trump administration soon broke the Iran deal and reimposed sanctions.

It’s because the stage was set by merchants single-mindedly bent on profits that no one suspected its eventual weaponization by governments. Had they seen ahead, US friends and foes alike would have resisted the US-led digital economy in much the way that the United States today can be seen clutching its pearls over TikTok, a social media platform that, while Chinese-owned, remains best known for 15-second dance-move videos posted by teenagers. The modern digital economy emerged because no one could see ahead to the influence that it would eventually offer some governments over others.

In their determination to paint governments as the great beneficiaries of the modern digital economy, Farrell and Newman occasionally come close to overstating their case. In particular, they speak of how multinational companies are now “caged by the demands of jealous states” aiming to reassert their authority over the private sector. Yet the relationship may be more of a quid pro quo. In 2011, the NSA’s secret budget included $394 million earmarked for payments to private companies, like Verizon/MCI, in exchange for data access.

SWIFT arose in the 1970s in response to an attempt by the US company Citibank to impose its own payments system on the rest of the world. It was, in other words, a response by European merchants to American overreach; the European VHS to Citibank’s Betamax. VHS won out, and Belgium was picked for SWIFT’s headquarters, a deliberate choice to counter American financial hegemony. This is what happens when others see ahead and try to hedge against the overconcentration of power in a single node. That is why the United States could not initially subpoena SWIFT the way it did domestic firms.

When the US eventually got its way in the wake of September 11, it was through a diplomatic marriage of convenience. EU privacy laws prevent European governments from harvesting data indiscriminately; the United States faces fewer restrictions. So, in exchange for obtaining access to SWIFT data, US intelligence agencies agreed to share actionable intelligence with their European counterparts.

If the weaponization of the digital economy is surprising, it’s because networks are not meant to be dominated by a few powerful players. The very selling point of networks lies in their decentralized nature, which promises greater resilience, and a more equitable distribution of power. Think of the initial utopian design behind the crypto economy: a vast decentralized network, immune to government interference, where relationships of power are replaced by incorruptible code.

Yet the theoretical beauty of networks can quickly be undermined by human incentives.

Crypto is a case in point. From that early utopian ideal, the crypto economy has grown into an oligopoly ruled by a handful of big intermediaries, each one an obvious target for regulation. As recent events have shown, the collapse of a single one of these large firms can jeopardize the whole system.

Network power relies on others buying in. For Instagram to exist, people must keep posting pictures of what they ate for brunch. For the US dollar to remain a global currency that puts US banks in charge, or for US-based data centers to hold the world’s secrets, the world must be willing to keep using dollars to settle its transactions, from lira to yen, and to keep storing its private data in places like Ashburn, Virginia.

100 Days. That’s All It Took to Sever America From the World.
By Ben Rhodes

In 1941, as President Franklin D. Roosevelt marshaled support for the fight against fascism, his chief antagonists were isolationists at home. “What I seek to convey,” he said at the beginning of an address to Congress, “is the historic truth that the United States as a nation has at all times maintained clear, definite opposition to any attempt to lock us in behind an ancient Chinese wall while the procession of civilization went past.” Roosevelt prevailed, and that victory expanded America’s relationship with the world in ways that remade both.

Eighty-four years later, President Trump is systematically severing America from the globe. This is not simply a shift in foreign policy. It is a divorce so comprehensive that it makes Britain’s exit from the European Union look modest by comparison.

In the short term, treating international relations like a protection racket could yield some bilateral transactions. Yet something more fundamental is being lost: trust. An America that, for all its mistakes abroad, guaranteed the security of its allies. An America that, for all its nativism, took in refugees and educated countless world leaders through its universities and exchange programs. An America that, for all its hubris, responded to humanitarian crises and showcased an appealing cultural openness. An America that people around the world liked more than its government.

The destruction of that trust will hurt us more than the rest of the world.

How America Blew Its Unipolar Moment
By Max Bergmann

International relations scholar G. John Ikenberry argued in his 2001 book, After Victory, that America needed to embrace its enlightened self-interest and to accept some constraints on its power in order to “lock in a favorable postwar order.” By showing strategic restraint, the United States was more likely to “gain the acquiescence of weaker” states as well as prepare for the day when the unipolar moment ended.

The Roosevelt administration was determined not to repeat the mistakes of the interwar period, when Washington rejected the League of Nations and enabled beggar-thy-neighbor economic policies. Before the war was even won, talks took place in 1944 at Dumbarton Oaks in Washington that led to the United Nations and at Bretton Woods in New Hampshire that established the postwar economic order. When the Soviets turned from allies to adversaries and a Cold War emerged, America’s isolationist tendencies were pushed aside, as Presidents Harry Truman and Dwight D. Eisenhower entered into alliances in Europe and Asia, provided massive military and development assistance, and insisted on the integration of Europe.

Yet after victory in the Cold War, there was no comparable U.S. effort to transform the international institutional order along the lines that Ikenberry proposed. There was no effort to dramatically bolster the United Nations, reform the Security Council, or create new robust institutions. Unable to ratify international agreements in the Senate, the United States stood on the outside as treaties including the Convention on the Law of the Sea; the Rome Statute, to establish the International Criminal Court; the Comprehensive Nuclear-Test-Ban Treaty; and the Kyoto Protocol on climate advanced. Sen. Jesse Helms led efforts to withhold U.S. funding for the United Nations, despite the fact that tens of thousands of U.N. peacekeeping forces were increasingly being deployed around the world to keep a lid on conflicts. The global institutions that were formed—such as the Community of Democracies established at the end of the Clinton administration, intended to link and organize democracies around the world—were quickly neglected by the United States. The most significant development in the global political architecture didn’t involve the United States at all, as it came at the regional level, with the formation of the European Union, Mercosur, and the African Union.

After 9/11, the United States was given another opportunity to remake the world. But instead, U.S. unilateralism was unleashed. The 1990s saw the ascendance of neoconservatives who broadly shared the liberal goals of Wilsonian internationalists but believed the way to achieve them was unilaterally, through U.S. hard power. As Robert Kagan and William Kristol wrote in an influential 1996 essay arguing for a neo-Reaganite foreign policy, the “appropriate goal of American foreign policy, therefore, is to preserve that hegemony as far into the future as possible.” They called for more defense spending and more muscular confrontation with adversarial regimes. The global war on terrorism, drone campaigns, and the invasion of Iraq made a mockery of the conception of a rules-based international order and greatly eroded global trust in U.S. hegemony, creating room for rivals to emerge and push back.

Drip by drip, this took a toll on America’s standing. It sapped U.S. credibility in multilateral forums, where U.S. hypocrisy was used as a cudgel and prompted the United States to engage less. The U.N. is barely ever discussed in Washington today. But as the United States paid less attention, China paid more, making global institutions a tough way to advance a more liberal world. There are now few major efforts to forge an international agreement on cyber, space, or new forms of weapons systems.

The Self-Liquidation of U.S. Global Leadership
By Alfred McCoy

Foreign aid — giving away money to help other nations develop their economies — remains one of America’s greatest inventions. In the aftermath of World War II, Europe had been ravaged by six years of warfare, including the dropping of 2,453,000 tons of Allied bombs on its cities, after which the rubble was raked by merciless ground combat that killed 40 million people and left millions more at the edge of starvation.

Speaking before a crowd of 15,000 packed into Harvard Yard for commencement in June 1947, less than two years after that war ended, Secretary of State George Marshall made an historic proposal that would win him the Nobel Peace Prize. “It is logical,” he said, “that the United States should do whatever it is able to do to assist in the return of normal economic health in the world, without which there can be no political stability and no assured peace.” Instead of the usual victor’s demand for reparations or revenge, the U.S. gave Europe, including its defeated Axis powers, $13 billion in foreign aid that would, within a decade, launch that ruined continent on a path toward unprecedented prosperity.

What came to be known as the Marshall Plan was such a brilliant success that Washington decided to apply the idea on a global scale. Over the next quarter century, as a third of humanity emerged from the immiseration of colonial rule in Africa and Asia, the U.S. launched aid programs designed to develop the fundamentals of nationhood denied to those countries during the imperial age. Under the leadership of President John F. Kennedy, who had campaigned on a promise to aid Africa’s recovery from colonial rule, disparate programs were consolidated into the U.S. Agency for International Development (U.S.A.I.D.) in 1961.

… U.S.A.I.D. and similar agencies collaborated with the U.N.’s World Health Organization (WHO) to eradicate smallpox and radically reduce polio, ending pandemics that had been the scourge of humanity for centuries. Launched in 1988, the anti-polio campaign, the Centers for Disease Control and Prevention estimates, spared 20 million children worldwide from serious paralysis.

Behind such seemingly simple statistics, however, lay years of work by skilled U.S.A.I.D. specialists in agriculture, nutrition, public health, sanitation, and governance who delivered a multifaceted array of programs with exceptional efficiency. Not only would their work improve or save millions of lives, but they would also be winning loyal allies for America at a time of rising global competition.

Enter Elon Musk, chainsaw in hand. Following President Trump’s example of withdrawing from the World Health Organization on inauguration day, Musk started his demolition of the federal government by, as he put it, “feeding U.S.A.I.D. into the wood chipper.”

With head-spinning speed, his minions then stripped the U.S.A.I.D. logo from its federal building, shut down its website, purged its 10,000 employees, and started slashing its $40 billion budget for delivering aid to more than 100 nations globally.

… the sharp cuts to U.S.A.I.D.’s humanitarian programs represent a crippling blow to America’s soft power at a time when great-power competition with Beijing and Moscow has reemerged with stunning intensity.

Last February, only a week after Washington cancelled $40 million that had funded U.S.A.I.D. initiatives for child literacy and nutrition in Cambodia, Beijing offered support for strikingly similar programs, and its ambassador to Phnom Penh said, “Children are the future of the country and the nation.” Making China’s diplomatic gains obvious, he added: “We should care for the healthy growth of children together.” Asked about this apparent setback during congressional hearings, Trump’s interim U.S.A.I.D. deputy administrator, Pete Marocco, evidently oblivious to the seriousness of U.S.-China competition in the South China Sea, simply dismissed its significance out of hand.

Although the dollar amount was relatively small, the symbolism of such aid programs for children gave China a sudden edge in a serious geopolitical rivalry. Just two months later, Cambodia’s prime minister opened new China-funded facilities at his country’s Ream Naval Base, giving Beijing’s warships preferential access to a strategic port adjacent to the South China Sea. Although the U.S. has spent a billion dollars courting Cambodia over the past quarter-century, China’s soft-power gains are now clearly having very real hard-power consequences.

The Age of American Unilateralism
By Michael Beckley

One reason the United States is going rogue is that it can. Despite decades of declinist warnings, American power remains formidable. The country’s consumer market rivals the combined size of the markets in China and the eurozone. Half of global trade and nearly 90 percent of international financial transactions are conducted in dollars, funneled through U.S.-linked banks—giving Washington the power to impose crippling sanctions. Yet the United States has one of the least trade-dependent economies in the world: exports account for just 11 percent of GDP (a third of which go to Canada and Mexico) compared with the global average of 30 percent. U.S. firms supply half of global venture capital, dominate the production of life necessities such as energy and food, and generate more than half of global profits in high-tech industries, including semiconductors, aerospace, and biotechnology—nearly ten times China’s share. The United States relies on China for high-volume industrial inputs—base chemicals, generic drugs, rare earths, and low-end chips—but China is far more dependent on the United States and its allies for high-end technologies and food and energy security. Both sides would suffer in a rupture, but China’s losses would be harder to replace.

Militarily, the United States is the only country that can fight major wars thousands of miles from its shores. Roughly 70 countries—accounting for a fifth of the world’s population and a third of its economic output—depend on U.S. protection through defense pacts and require U.S. intelligence and logistics to move their own forces beyond their borders. In a world so deeply dependent on the U.S. market and military, Washington has immense leverage to revise the rules—or abandon them altogether.

Drones, long-range bombers, cyberweapons, submarines, and precision missiles potentially allow the United States to strike targets across the globe while relying less on large, permanent overseas bases—which are increasingly vulnerable to adversaries armed with similar technologies. As a result, the U.S. military is shifting from a force geared to protect allies to one focused on punishing enemies by launching strikes from U.S. territory, deploying automated kill zones of drones and mines near adversaries’ borders, and sending nimble expeditionary units to hit high-value targets and slip away before taking casualties. The goal is no longer deterrence through presence—it’s destruction from a distance.

This same logic is reshaping the U.S. economy. Automation and AI are shrinking the demand for foreign labor. Additive manufacturing, or 3D printing, and smart logistics are compressing supply chains and enabling reshoring. AI is replacing foreign call centers. With increasingly automated factories, cheap energy, and the world’s biggest consumer market, U.S. firms are coming home—not just for security but because it makes business sense. The United States’ reliance on the global economy won’t vanish, but it’s becoming narrower, more selective—and easier to sever when the next global crisis hits. A fortress economy is rising to match a fortress military. And together, they are making disengagement feel both safer and smarter.

How to Ruin a Country
By Stephen M. Walt

International politics is inherently competitive, which is why states are better off with lots of mostly friendly partners and relatively few enemies. A successful foreign policy, therefore, is one that maximizes the support you get from others and minimizes the number of opponents you face. Aided by highly favorable geography, the United States has been remarkably successful at getting support from consequential allies in other parts of the world, and it has been much better at this than most of its adversaries. A key ingredient of that success was not acting in an overly aggressive or belligerent way, even while exercising enormous influence. By contrast, Wilhelmine Germany, the Soviet Union, Maoist China, Libya, and Iraq under Saddam Hussein all adopted bellicose and threatening behavior that encouraged their neighbors and others to join forces against them. All great powers play hardball on occasion, but a smart great power wraps its mailed fist in a velvet glove, so as not to provoke unnecessary opposition.

What is Trump doing instead? In less than three months, the Trump administration has repeatedly insulted our European allies; threatened to seize territory belonging to one of them (Denmark); and picked needless fights with Colombia, Mexico, Canada, and several other countries. Trump and Vice President J.D. Vance have publicly bullied Ukrainian President Volodymyr Zelensky in the Oval Office and, like Mafia bosses, keep trying to coerce Ukraine into signing over mineral rights in exchange for continued U.S. assistance. With great fanfare, the administration has dismantled the U.S. Agency for International Development, withdrawn from the World Health Organization, and made it abundantly clear that the government of the world’s largest economy is no longer interested in helping less fortunate societies. Can you think of a better way to make China look good by comparison?

And then last week, Trump blithely ignored repeated warnings from economists across the political spectrum and imposed a bunch of bizarrely constructed tariffs on a long list of allies and adversaries. Wall Street’s verdict on Trump’s ignorant decision was instantaneous—the largest two-day plunge in the stock market in U.S. history—as forecasts of a recession soared. This harebrained decision wasn’t a response to an emergency or forced upon the country by others; it was a self-inflicted wound that will make millions of Americans poorer, even if they don’t own a single share of stock.

The geopolitical consequences will be no less significant. Some states are already retaliating in kind—further raising the risks of a global recession—but even countries that don’t hit back will try to reduce their reliance on the American market and begin pursuing mutually beneficial trade arrangements without the United States. And as I noted in my last column, starting a trade war with our Asian allies is at odds with the administration’s stated desire to compete with China.

Tearing up the rules or withdrawing from key international organizations just makes it easier for other states to rewrite the rules in ways that favor them.

Much Maligned But Still Necessary: the U.N. at 75
By Michael Hirsh

Washington’s animosity toward its own postwar creation, the United Nations, has a long history extending to Democratic as well as Republican administrations.

The General Assembly has been little more than a talking shop for most of its history. The U.N. Security Council—designed by Washington to be a great-power-dominated enforcer of world peace—is all but paralyzed by worsening hostility between the U.S., China and Russia, each of which vetoes resolution after resolution against the others.

Most experts and diplomats believe Trump is being foolish in snubbing the United Nations—and leaving it open to Chinese influence—because for all its flaws it still critically serves U.S. interests. Indeed, when viewed over its entire long history, the United Nations has mostly realized its larger purpose: preventing a third world war, argues Stephen Schlesinger, author of a 2004 book about the U.N.’s founding, Act of Creation. “The U.N. played a direct role in resolving the biggest and most dangerous confrontation we’ve had in the last 75 years, the Cuban Missile Crisis.” This is true: Then-U.N. Ambassador Adlai Stevenson had a critical part in averting nuclear war between the United States and the Soviet Union when he famously confronted the Soviet ambassador in the Security Council, telling him, “You are in the court of world opinion right now.”

Schlesinger counts about 30 cases in its history where the United Nations played a peacemaking role in preventing a regional or local conflict that could have threatened a wider war—some that easily might have drawn in Washington. These include crises in Cambodia, Mozambique, Guatemala, Angola, Serbia, South Africa, Namibia and Croatia.

Yet few of these events garnered major headlines. “The problem is the U.N. failures tend to be dramatic and the successes tend to be mundane,” said Richard Gowan of the International Crisis Group, referring to past blue helmet disasters such as the Rwandan genocide and the 1995 slaughter at Srebrenica in Bosnia.

The United Nations played an essential role in the Paris climate pact and the Iran nuclear deal—both of which Trump has rejected … . And the Security Council has been necessary for organizing broad alliance support in U.S.-led missions such as the Gulf War. It even became important in supporting George W. Bush’s early enforcement actions against Saddam Hussein in 2002, before Bush went beyond his U.N. remit and invaded. The U.N. has created critical institutions from the International Atomic Energy Agency—which monitors the largely successful Nuclear Non-Proliferation Treaty signed by 191 states—to the World Food Program.

Though the United Nations was conceived at the Dumbarton Oaks estate in Washington, D.C., and inaugurated in San Francisco—then plunked down in the middle of New York City—it has long been regarded as an alien entity by many Americans who see it as a threat to U.S. sovereignty.

Yet for most of its existence, when it comes to the really big issues, the United Nations has acted more as an adjunct to U.S. power than an obstructionist force.

And far from threatening world government, the United Nations has never gone to war in the name of collective security unless the United States has orchestrated such military action. And this has happened just twice in its history—Korea in the early 1950s, and the Gulf War in 1990-91.

For all its flaws and failures, the U.N. Security Council is also necessary if the United States is to impose its influence as the lone superpower without too much in-your-face unilateralism and while minimizing any commitment of U.S. forces. U.N. legitimation gives foreign leaders the face they need to sign onto U.S. initiatives. As former U.S. general and NATO commander Wesley Clark—who led the multilateral effort in the Balkans in the 1990s—once said, the role of the U.N. Security Council “is huge because it enables your friends to do what you want them to do in their own domestic politics,” and they don’t have to admit to bowing to U.S. pressure. The United Nations also acts as a clearinghouse for an endless slew of aid projects that Washington has no interest in orchestrating. In war-torn Afghanistan in the winter of 2001-2002, for example, it was the World Food Program that managed to get food—much of it supplied by U.S. aid—to hard-to-reach regions as the war was still going on. Earning almost no headlines, the WFP averted a famine.

And in the end, most presidents come to see the U.N.’s broader role—the vast substructure of stability it provides to the global system.

The new fight to reform the UN’s colonial-era world order
By Tara John

The UN Security Council has been dominated by just five countries (the United States, China, Russia, France, and the United Kingdom) since its inception from the ashes of World War II, when much of the world was still under colonial rule.

Today, countries around the world get to take turns in the council as non-permanent members, but no country in the Middle East, Africa, Latin America or the Caribbean has the permanent members’ crucial veto power.

The veto allows permanent members, known as the P5, to block any resolution, ranging from peacekeeping missions to sanctions, in defense of their national interests and foreign policy decisions.

Deep divisions among the permanent members have led to growing frustration with the Security Council’s inability to stem the world’s biggest problems, from bloody conflicts in Gaza and Ukraine, to the threat of nuclear weapons and climate change.

“US and Russia often exercise their veto to either protect a client state, in the case of Israel or Syria, or to protect their own national interests, as in the case with Russia vetoing on Ukraine,” Anjali Dayal, a UN expert and an international politics assistant professor at Fordham University, told CNN.

France and the United Kingdom have limited their use of veto power since 1989. But the post-Cold-War years have seen the US, Russia, and China use the chamber to “exonerate their allies and shield themselves from the consequences of their unpopular foreign policy decisions,” she added.

… any attempt to remove veto powers from the P5 would be a “non-starter” and is “not something the big three would ever agree with,” a senior UN diplomat told CNN, referring to the US, Russia and China.

But what could work is “lowercase reform,” say experts and diplomats, who point to a 2022 initiative tabled by Liechtenstein that was adopted by the General Assembly. It mandates that any veto cast by a P5 member be debated in the General Assembly. While the process cannot overturn a veto, it raises the political cost of the P5 exercising their unilateral power.

The Return of Great-Power Diplomacy
By A. Wess Mitchell

In the summer of 432 BC, the leaders of Sparta gathered to consider whether to go to war with Athens. For months, tensions had been building between the two city-states as the Athenians clashed with Sparta’s friends and the Spartans sat idly by. Now a group of hawks, egged on by the allies, were eager for action.

But Archidamus II, Sparta’s aging king, suggested something different: diplomacy. Talks, Archidamus told the assembly, could forestall conflict while Sparta worked to make new allies and strengthen its hand domestically.

I do bid you not to take up arms at once, but to send and remonstrate with [the Athenians] in a tone not too suggestive of war, nor again too suggestive of submission, and to employ the interval in perfecting our own preparations. The means will be, first, the acquisition of allies, Hellenic or barbarian it matters not . . . [,] and secondly, the development of our home resources. If they listen to our embassy, so much the better; but if not, after the lapse of two or three years our position will have become materially strengthened. . . . Perhaps by that time the sight of our preparations, backed by language equally significant will have disposed [the Athenians] to submission, while their land is still untouched, and while their counsels may be directed to the retention of advantages as yet undestroyed.

At first, Archidamus’s address did not sway the assembly; the Spartans voted for war. But in the weeks that followed, the city realized it was unready for battle, and the old man’s wisdom sank in. Sparta sent envoys far and wide to slow the rush to war and pull other city-states to its side. When war came a year later, Sparta was in a better position to wage it. And when Sparta triumphed two decades later, it was not because it had the better army but because it had assembled a bigger and better array of allies—including an old archenemy, Persia—than did Athens.

Archidamus’s suggestions have worked for countless other great powers over the centuries. Consider, first, using diplomacy to buy time and prepare for war. When new barbarian tribes appeared, the Romans, the Byzantines, and the Song dynasty all made it a practice to send envoys in an effort to buy time for replenishing armories and granaries. The Roman Emperor Domitian struck a truce with the Dacians that allowed Rome to gather its strength until a new emperor, Trajan, was ready for war a decade later. Venice brokered a long peace with the Ottomans after the fall of Constantinople to beef up its fleets and fortresses. And the French chief minister Cardinal Richelieu used diplomacy to stall with Spain for nearly a decade so that France could mobilize.

Archidamus’s next suggestion—form alliances to constrain the enemy’s options—has been similarly enduring. The French kings allied with the heretic Lutherans and infidel Ottomans to restrict their fellow Catholic Habsburgs. The Habsburgs allied with the Bourbons to constrain the Prussians. Edwardian Britain cooperated with its colonial rivals France and Russia to join forces against imperial Germany.

In each of these cases, success meant cultivating favorable balances of power in critical regions. This is perhaps the core purpose of strategic diplomacy—and what allows countries to project power far beyond their material capabilities. With the Vienna system he engineered, Austrian Foreign Minister (and later Chancellor) Klemens von Metternich used the balance of power to extend his empire’s position as a great power well beyond its natural lifespan. German Chancellor Otto von Bismarck pulled off a similar feat in the late nineteenth century. By cutting deals with Austria, Russia, and the United Kingdom, he was able to isolate France and avoid a two-front war that might have strangled the German empire in its infancy.

These leaders never tried to forge partnerships based on anything other than shared interests. They did not believe they could transform hostile countries into friendly ones through logic and reason. They certainly never believed that diplomacy could overcome irreconcilable visions of how the world should be. Their goal was to limit rivals’ options, not seek to remove the sources of conflict. Departing from that logic can lead to catastrophe, as occurred when British Prime Minister Neville Chamberlain met with German leader Adolf Hitler in 1938. Rather than use diplomacy to amplify the domestic and international constraints on Hitler, Chamberlain weakened them by giving him what he wanted in hopes that German expansionism would then cease. Doing so emboldened Berlin and paved the way for World War II.

World War III Begins With Forgetting
By Stephen Wertheim

The United States now faces the real and regular prospect of fighting adversaries strong enough to do Americans immense harm. The post-Sept. 11 forever wars have been costly, but a true great power war — the kind that used to afflict Europe — would be something else, pitting the United States against Russia or even China, whose economic strength rivals America’s and whose military could soon do so as well.

The specter of full-scale war kept the Cold War superpowers in check. In 1950, Truman sent U.S. troops to defend South Korea against invasion by the Communist North, but his resolve had limits. After Gen. Douglas MacArthur implored Truman to blast China and North Korea with 34 nuclear bombs, the president fired the general. Evoking the “disaster of World War II,” he told the nation: “We will not take any action which might place upon us the responsibility of initiating a general war — a third world war.”

Since the Vietnam War roiled American society, leaders moved to insulate the American public from the harms of any conflict, large or small: The creation of an all-volunteer force did away with the draft; air power bombed targets from safe heights; the advent of drones allowed killing by remote control.

The deaths of more than 7,000 service members in the post-Sept. 11 wars — and approximately four times as many by suicide — devastated families and communities but were not enough to produce a Vietnam-style backlash. Likewise, although the wars have cost a whopping $8 trillion and counting, the payments have been spread over decades and passed to the future.

Not having to worry about the effects of wars — unless you enlist to fight in them — has nearly become a birthright of being American.

That birthright has come to an end. The United States is entering an era of intense great power rivalry that could escalate to large-scale conventional or nuclear war. It’s time to think through the consequences.

A series of recent war games held by think tanks helps us imagine what such a war would look like: First, a war will likely last a long time and take many lives. Early on, China would have incentives to mount a massive attack with its now highly developed long-range strike capability to disable U.S. forces stationed in the Pacific. Air Force Gen. Mark D. Kelly said that China’s forces are “designed to inflict more casualties in the first 30 hours of combat than we’ve endured over the last 30 years in the Middle East.”

In most rounds of a war game recently conducted by the Center for Strategic and International Studies, the United States swiftly lost two aircraft carriers, each carrying at least 5,000 people, on top of hundreds of aircraft, according to reports. One participant noted that although each simulation varied, “what almost never changes is it’s a bloody mess and both sides take some terrible losses.” At some stage, those Selective Service registrations required of young American men might need to be expanded and converted into a draft.

Second, each side would be tempted to escalate. This summer, the Center for a New American Security held a war game that ended with China detonating a nuclear weapon near Hawaii. “Before they knew it,” both Washington and Beijing “had crossed key red lines, but neither was willing to back down,” the conveners concluded. Especially in a prolonged war, China could mount cyberattacks to disrupt critical American infrastructure. It might shut off the power in a major city, obstruct emergency services or bring down communications systems. A new current of fear and suspicion would course through American society, joining up with the nativism that has reverberated through national politics since Sept. 11.

The economic consequences would be equally severe. A Chinese invasion of Taiwan, which produces most of the world’s advanced semiconductors, would profoundly damage the U.S. and global economy regardless of Washington’s response. (To this end, the United States has been trying to move more semiconductor manufacturing home.) But a U.S.-China war would risk catastrophic losses. Researchers at RAND estimate that a yearlong conflict would slash America’s gross domestic product by 5 to 10 percent. By contrast, the U.S. economy contracted 2.6 percent in 2009, the worst year of the Great Recession. The gas price surge early in the Ukraine war provides only the slightest preview of what a U.S.-China war would generate. For the roughly three-fifths of Americans who currently live paycheck to paycheck, the war would come home in millions of lost jobs, wrecked retirements, high prices and shortages.

In short, a war with Russia or China would likely injure the United States on a scale without precedent in the living memory of most citizens.

The Thucydides Trap: Vital lessons from ancient Greece for China and the US … or a load of old claptrap?
By Andrew Latham

The so-called Thucydides Trap has become a staple of foreign policy commentary over the past decade or so, regularly invoked to frame the escalating rivalry between the United States and China.

Coined by political scientist Graham Allison — first in a 2012 Financial Times article and later developed in his 2017 book “Destined for War” — the phrase refers to a line from the ancient Greek historian Thucydides, who wrote in his “History of the Peloponnesian War,” “It was the rise of Athens and the fear that this instilled in Sparta that made war inevitable.”

Thucydides fought in the Peloponnesian War on the Athenian side. His world was steeped in the sensibilities of Greek tragedy, and his historical narrative carries that imprint throughout. His work is not a treatise on structural inevitability but an exploration of how human frailty, political misjudgment and moral decay can combine to unleash catastrophe.

To read Thucydides carefully is to see that the Peloponnesian War was not solely about a shifting balance of power. It was also about pride, misjudgment and the failure to lead wisely.

Consider his famous observation, “Ignorance is bold and knowledge reserved.” This isn’t a structural insight — it’s a human one. It’s aimed squarely at those who mistake impulse for strategy and swagger for strength. Or take his chilling formulation, “The strong do what they will and the weak suffer what they must.” That’s not an endorsement of realpolitik. It’s a tragic lament on what happens when power becomes unaccountable and justice is cast aside.

Seen in this light, the real lesson of Thucydides is not that war is preordained, but that it becomes more likely when nations allow fear to cloud reason, when leaders mistake posturing for prudence and when strategic decisions are driven by insecurity rather than clarity.

Thucydides reminds us how easily perception curdles into misperception — and how dangerous it is when leaders, convinced of their own virtue or necessity, stop listening to anyone who disagrees.

In today’s context, invoking the Thucydides Trap as a justification for confrontation with China may do more harm than good. It reinforces the notion that conflict is already on the rails and cannot be stopped. But if there is a lesson in “The History of the Peloponnesian War,” it is not that war is inevitable but that it becomes likely when the space for prudence and reflection collapses under the weight of fear and pride. Thucydides offers not a theory of international politics, but a warning — an admonition to leaders who, gripped by their own narratives, drive their nations over a cliff.
