Hmm.

New article in The Nation about how swimming is especially transphobic.

There is a particularly interesting passage:

World Aquatics, the international federation that governs the sport of swimming, released a new transgender participation policy in July 2022 that essentially bans trans women from competing by creating incredibly restrictive requirements for their inclusion. (As I have written previously, there is no real evidence that trans athletes have an inherent advantage over their cisgender counterparts.)

Frankie de la Cretaz

If you click through those links, you will see they all cite back to the same study, from the Canadian Centre for Ethics in Sport, performed by the pro-LGBTQ+ group E-Alliance.

Its key biomedical findings are these:

TL;DR: trans women retain some athletic advantages after 12 months of HRT (bone density, height, strength, muscle mass), while losing others (hemoglobin levels, perhaps muscular endurance).

Okay, got it.

That result is basically confirmed by studies of gender transition in the military, and by many other studies.

If you ever encounter this debate going forward, the actual scientific results are surprisingly consistent:

  1. Controlling for height, trans women retain some athletic advantages over cis women, but not all.
  2. Some advantages quickly disappear after starting HRT, controlling for height.
  3. Other advantages survive 36 months or more.
  4. Height/weight/body size is an independent source of advantage in most sports.
  5. But trans women become unable to compete on even footing with male athletes soon after beginning HRT.

What I hope to achieve with this blog post is that all of my (two or three) lovely readers learn the facts. Those are the facts, as confirmed by every study, including the one by the Canadian Centre for Ethics in Sport.

The Canadian Centre for Ethics in Sport described their findings a little differently than I would:

Available evidence indicates trans women who have undergone testosterone suppression have no clear biological advantages over cis women in elite sport.

E-Alliance

A conservative might call that description a “lie.”

I don’t think that’s quite correct.

I think that the word “clear” used above is a load-bearing modifier: a tiny little word that most people skip over, but which turns an obvious falsehood into something basically true. Very important to lawyers. You could rephrase E-Alliance’s description as:

There is insufficient evidence to clearly demonstrate that trans women who have undergone testosterone suppression have biological advantages over cis women in elite sport.

E-Alliance, rephrased.

That’s correct, because there have been basically no studies on trans-women in elite sport. Plus, if you think about it, it’s not clear how you would measure “clear biological advantages” in a specifically elite-sport context. Clear biological advantages compared to whom?

This is all very sad. Trans women can’t compete with cis men on even ground, but cis women can’t compete with trans women on even ground.

Whatever rules we choose will leave some people rightly feeling pissed off.

Some people think that the cis-female athletes should just shut the fuck up, because trans-women already face startling disadvantages, so they should take the L and move on.

I think that’s reasonable to say to most people.

The problem is that some people care about elite women’s sports more than anything else in the world, so telling them to just shut up is pretty unfair. Those people are called “elite women’s athletes.”

Hmm.

Wow, Dean Heather Gerken actually killed the US News Rankings.

New US News law school rankings came out and they’re a shitshow.

A Duke/Harvard tie is just silly. No student has a hard decision between Harvard and Duke unless their parents are in Durham and are hospitalized. Harvard probably has the best legal faculty in the world.

[NOTE: lol, no, I do not go to HLS]

Rankings for law schools actually matter. Law schools are far more hierarchical than other grad programs, and you want to end up on the right end of that hierarchy.

The law is a punishingly hierarchical profession. Lawyers will unironically talk about an attorney’s “pedigree,” meaning where they went to law school, who they clerked for, etc.

The reason is that in law, your reputation is your biggest asset. Grading outputs is hard, so clients grade inputs, and they grade inputs by the names on diplomas. Furthermore, law is adversarial. An intimidating reputation brings opponents to heel in settlement negotiations and makes judges pay closer attention to you. So, raising expectations of your work-product raises the quality of your work-product.

So, students choose law schools basically just by reputational ranking (plus scholarship money).

That makes USNews’ job pretty easy. Since students filter so effectively, the rankings are clear. It’s where the students historically chose to go. That information is relevant to students, because students need to know the best reputational asset they can buy.

So – let me be clear – the vibes-based ranking of schools is the true ranking, whatever other numbers say. Quality of the teaching, resources available to students, all that stuff maybe matters on the margin. But the reputation of the school matters more, because of the signaling value. Since students filter so hard, the name on your diploma tells employers a ton about how smart and hardworking you are. You don’t want to accidentally under-signal your value. Reputations are widely known.

The value of USNews is reporting this widely known information to uninitiated kids. That’s especially useful for kids at public schools who lack career counseling. USNews just has to dress the reputation ranking in algorithms about teaching and whatever so they look less elitist. Easy!

The job is even easier because USNews’ decisions self-vindicate. If USNews makes a hard judgment call, their decision determines where students go, which determines what the ranking should be, which retroactively makes their decision correct.

So like, USNews does not have a hard job.

And yet they still managed to fuck it up? Partly this is because law school deans, who hate the rankings, all conspired to not provide USNews any information.

But USNews should have been fine, since they don’t actually need those numbers. Yes, USNews has to make their rankings by pretending to use an algorithm, but, like, lol. If the data-driven algorithm conflicts with the vibes-based ranking, USNews just changes its algorithm.

And, bizarrely, it does not appear that USNews fucked up the ranking by taking vengeance on the schools that wronged them. Cornell and Chicago were the only two top schools that gave US News data this year. Only one of those schools (Chicago) is overplaced, it’s only overplaced by one spot, and it was similarly overplaced last year.

It is not clear how USNews fucked this up. It might be because they de-prioritized inputs and reputation while prioritizing bar passage rates and employment. That’s actually good for telling students which low-tier schools are scams vs. bargains, but it leaves them unable to properly rank top schools, where bar passage rates mostly just reflect how many people go into academia, the Hill, etc.

Since USNews pulls their rankings out of their asses, I don’t know why they couldn’t do both jobs at the same time.

I can do the job they failed.

Here is the true ranking of the top 20 law schools in America:

  • 1: Yale
  • 2: Stanford
  • 3: Harvard

_______________IMPORTANT LINE_____________________

  • 4: Chicago
  • 5: Columbia
  • 6: NYU

_______________IMPORTANT LINE_____________________

  • 7: Penn
  • 8: UMich and UVa
  • 10: Berkeley
  • 11: Duke, Northwestern
  • 13: Cornell
  • 14: Georgetown, UCLA

_______________IMPORTANT LINE_____________________

  • 16: UT
  • 17-19: Vandy, WashU, USC
  • 20: Minnesota, BU, Notre Dame, maybe GW

There are tiny ways you could quibble with this. If you want to become an academic, choose Chicago over Stanford. If you want to go into government, choose Georgetown over Cornell.

And, my ranking ignores high-risk strategies. If you go to Harvard and win moot court, that might be more impressive than anything a person could do at Yale.

But, basically, the above ranking is correct.

I May Have Been Right About Tests

A while ago I said that colleges faced a vicious cycle that made them all go SAT blind:

  1. Go test optional to raise your SAT average and lower your acceptance rate
  2. But this raises test averages at every college, and it means the SAT is less useful since lots of kids don’t take it
  3. So the schools that have low test averages even though they’re test-optional go test-blind
  4. So now colleges with decent averages look like they have low average scores
  5. So everyone has to go test blind.
  6. But now no one can identify promising kids, which sucks.
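The loop above is easy to see in a toy sketch (every number here is hypothetical: the inflation bump, the averages, and the three colleges are all made up for illustration). Once some schools go optional, a school with a genuinely higher true average reports a lower number, so its best move is to defect too:

```python
# Toy model of the test-optional cascade. Every number here is made up;
# the point is the dynamic, not the magnitudes.

INFLATION = 60  # hypothetical bump from only high scorers submitting

def reported_average(true_avg, policy):
    # a test-optional school reports only its submitters, who skew high
    return true_avg if policy == "required" else true_avg + INFLATION

colleges = {
    "A": {"true_avg": 1400, "policy": "required"},
    "B": {"true_avg": 1370, "policy": "optional"},
    "C": {"true_avg": 1350, "policy": "optional"},
}

reports = {name: reported_average(c["true_avg"], c["policy"])
           for name, c in colleges.items()}
print(reports)  # {'A': 1400, 'B': 1430, 'C': 1410}

# A has the best true average but now *looks* worst, so its best
# response is to defect to test-optional too. After that, everyone's
# reported number is inflated and carries less information (step 6).
colleges["A"]["policy"] = "optional"
```

Once the scores stop carrying information, going blind costs nothing, which is the last step of the cascade.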

Was that model right? It looks like it was. Why do I say that? Because a bunch of law deans have objected to a policy that would allow law schools to not require LSATs.

If a law school thinks the LSAT is good, you would think they could just keep requiring it themselves. But the schools want the rule to come from the ABA, not from the law schools, so there is a collective action problem.

Probably it’s the problem I listed above.

American Iconoclasm

People like tearing down confederate monuments. Seems largely sensible.

Some worry there is a slippery slope.

Circa 2016 the respectable position was to deny that such a slope existed: no one would desanctify the good-but-problematic guys like George Washington.

Then in 2020 people tore down some statues of George Washington in Portland.

To be fair, that was just Portland. Nothing happened irl.

But then you started seeing committees in real life places like NYC, SF, and DC removing the names Washington and Jefferson from places of honor.

And now there’s even some support to delionize Lincoln.

C’est la vie.

To be fair, Lincoln, Jefferson, Washington all did pretty bad things.

Sally Hemings was like 15 when her relationship with Jefferson started, even if it was sort of consensual.

So there is some intuitive sense to dethroning these guys.

But the trouble is that they were basically the good guys. You can tell if you read their letters. Or, compare them to post-Bolivar and post-Louverture elites. We have a lot to thank them for.

And yet they still did pretty reprehensible things. The past was pretty backward.

Matt Bruenig accordingly concludes that you shouldn’t valorize people, but rather valorize acts. No one is really pure enough to beatify, and everyone looks bad in hindsight.

https://twitter.com/MattBruenig/status/1656717114122592256

I think Bruenig is basically correct, insofar as he is talking to sane, rational, educated adults.

But what Bruenig misses is that lionization is for children.

Some facts about children:

  1. We have to teach children history so they’re not ignorant little fucks.
  2. We also have to teach them morality and social norms.
  3. We have to convince them that following social norms is a good idea.
  4. Children like big powerful things, especially boy-children.

Here is a five-step-plan for doing all of that at once:

  1. Teach children exciting stories from history.
  2. Make it very clear who the heroes and villains are by exaggerating the heroes’ virtues and minimizing their flaws.
  3. Make the heroes be cultural relatives of the children, while making the villains sort of foreign (maybe the villains are British!) so the children identify with the heroes.
  4. Make the heroes win. Maybe they die (like John Henry) or they lose the war (like Athens) but eventually history vindicates them.
  5. Tell the children that the stories are literally true.

You may have noticed that the five-step plan above is followed in every culture ever, anywhere on the globe. Sometimes people go to wild lengths to fit history to the plan. Medieval Persian poets typically portrayed Alexander as the secret rightful heir to the Achaemenid throne.

Using the plan, you can teach kids to be moral while teaching them semi-accurate history, and convince them that being moral is a good idea: the good guys win! You make kids think the stories are cool, rather than lame, by having the characters be actual successful people from reality, and occasionally making ultraviolent movies about them.

You can see how this process doesn’t work if you’re just lionizing actions rather than people, as Bruenig suggests. If you wanted to teach morality, you would have to say, for every action, “… and this was good” or “… and this was bad.”

That would not be efficient.

Furthermore, you couldn’t convince kids that good triumphs. We can’t expect kids to calculate the expected value of goodness by tallying up exactly how much good stuff characters did and how much they succeeded. What we can expect kids to do is to see that the good guys won.

It’s good to convince people that good guys win and that defections will be punished. Trusting societies are the societies where people cooperate to punish defections. Unsurprisingly it is trusting societies that succeed economically.

One way to convince children that heroes win is to make up some heroes and put their names on important stuff.

Then, you grow up, and you realize that history is more nuanced than the history they taught you as a kid. Maybe you get disillusioned; maybe you go ancom in college or leave the Baptist church. But you still have this deep irrational sense that you should cooperate for the good of the group. Which was the point.

So, if we’re to remove George Washington’s name from things, it should be because we have better options.

Probably the right move is to just pretend George Washington was black.

Institutions are the smartest dumbest.

One of the classic SlateStarCodex posts is about how pathetically easy it is to be smarter than institutions.

Here is a great example of that.

Robert Contee, chief of the Metropolitan Police, says that violent crime in DC has fallen since 2015, and Robert Contee is obviously, obviously incorrect.

If you’ve been in cities in the last ten years, you’ve noticed that there’s a lot more crime. And yet Contee says that violent crimes are down.

He’s obviously wrong. That’s why your intuition disagrees with his numbers.

What’s going on is that reported violent crimes are down since 2015. But that’s not because there are fewer crimes. It’s because fewer crimes are being reported. Since 2015, urban crime victimizations have increased, but victims are much less likely to report them to police than in 2015, especially black victims in urban areas (who make up almost all victims of violent crime in DC). We know that from the National Crime Victimization Survey, which gives us a pretty good idea of the rate of unreported crime.

The reason for the decline in reporting is that cops have basically stopped policing in the United States, especially in cities. The total number of arrests in the US is basically 30% of what it was in the oughts, with a sudden collapse in 2020.

Even though, over the same time, rates of victimization haven’t changed much, and have plausibly gone up.

So, a decline in reported crimes doesn’t mean there was a decline in actual crimes. It just means cops aren’t finding out about them.
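The arithmetic behind this is worth making explicit. A hypothetical sketch (none of these figures are DC's real numbers; they're chosen only to show how the sign of the trend can flip):

```python
# Hypothetical illustration: reported crime = actual crime x reporting rate.
# None of these numbers are DC's real figures; they're picked so that the
# reporting rate falls faster than victimization rises.

actual_2015, pct_reported_2015 = 10_000, 45   # made-up baseline
actual_now,  pct_reported_now  = 12_000, 30   # more crime, less reporting

reported_2015 = actual_2015 * pct_reported_2015 // 100
reported_now  = actual_now  * pct_reported_now  // 100

# Reported crime "fell" 20% even though actual crime rose 20%.
print(reported_2015, reported_now)  # 4500 3600
```

If the reporting rate drops faster than victimization rises, the reported series falls while the true series climbs, which is exactly the pattern the victimization surveys suggest.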

How can we tell how many crimes are actually happening?

Look at crimes that always get reported.

Which crimes are those?

Homicides and carjackings. Every criminologist knows this.

Homicides are basically always reported because there’s a body. Carjackings are always reported because the owner has to report the crime to collect insurance.

So, what’s happened to murders in DC since 2015? They’ve risen almost 50%.

What’s happened to carjackings? They’ve tripled in the last 5 years.

From that we have pretty good reason to believe that crimes have risen overall.

So, no, crime has not fallen in DC since 2015. Crimes have probably increased, and simultaneously become much less likely to be reported.

So it’s interesting and maybe sad that high up law enforcement officials would say otherwise.

Perhaps the DC chief of police doesn’t know basic ground-level facts about crime in the district.

I think probably not. I think it’s more likely the chief of police will parrot the official statistics that he knows are wrong, because it wouldn’t be worth constantly explaining to people how the statistics should be interpreted.

If you read the thread linked above, you can see that the police chief wants people to discount the statistics, even though he never directly contradicts them. That suggests he knows the stats are wrong and just isn’t going to bother making his actual decision-making process explicit. The alternative is that he genuinely believes in the spurious fall in crime but thinks we should act anyway because people are upset by perceptions of crime. Unlikely.

And what’s funny is that whatever makes him keep his deliberating process secret probably isn’t anything suspicious. I doubt he’s trying to hide the numbers; crime waves can be good for police chiefs, since they bring funding, prestige, etc. I think explaining reality to the public would just be a hassle, and it’s not worth his time.

And now it’s time for a total about-face 60% of the way through this blogpost.

This is a topic that interests me: science, economics, politics, and other high end professions appear to have their debates in public, where everyone can look at them. But, really, they don’t.

Really, all major arguments happen behind closed doors. True, public discourse does inform these secret debates. And, there’s a decent amount of semi-public debate, on blogs, twitter, etc. That debate is sort of anonymized, but sort of public, but sort of on the DL.

For example, everyone important in journalism and policy-world reads internet racist Steve Sailer, even people who aren’t racist, because he has useful information to provide even to sane, regular people. But you can’t admit that you know the things you learned from Sailer. But they are good to know. There is a purpose served by keeping some things sub rosa.

Other parts of the semi-public discourse are more transparent to the public. Twitter isn’t all anons, after all.

But even that relatively public side of the discourse doesn’t exactly happen in public. It’s structured around personal connections within circles of influence, and what appears in public are just the emissions of the real discourse. For example, Matt Yglesias is very influential in the Biden White House as a blogger. In some sense he is involved in public discourse. But why is he influential? What is his special skill? It’s that he knows a lot of important people in DC and they say things to him that they wouldn’t say in public.

And I personally am a participant in this. This is technically an anonymous blog, so I will not say exactly what grad program I am in, but suffice it to say that I have gotten to hang out with important people, who have said stuff to me like:

  • The CDC has basically lost the plot (said by a former head of the FDA)
  • The Supreme Court is expecting that its decisions will force the end of the filibuster (said by a SCOTUS justice)
  • Grutter (the decision that made affirmative action legal) was an intentionally dishonest decision, since the reasoning didn’t fit the facts. SCOTUS was telling lower courts to enforce the law differently from how it was written down (said by someone who’d been clerking on SCOTUS at the time)
  • The Biden administration is influenced by “Latinx Voices” type nonprofits solely because journalists go to these nonprofits to get quotes on the public’s response to White House decisions (said by one of the top advisers in the Biden administration).

None of these people would have said this stuff in public. Sure, these things aren’t exactly inflammatory, but they’re the kind of thing you might be cautious about putting in the public record in case it’s held against you. So they wouldn’t have told me if I were a journalist. But these things weren’t said to me in private either. I met these people through school-adjacent events, and at those events there’s an understanding that one will be discreet.

So, my sympathies to the conspiracists out there. There really is a secret world where power sits; everything that happens in public is a big lie.

What I will say, sitting inside the cathedral, is that most of the people in charge seem good at their jobs.

Big Eureka

Big Eureka.

Unironically, Big Eureka is probably the best YIMBY idea out there.

Let me explain.

One bizarre thing about the NorCal/SoCal divide is that “Northern California” is actually in the middle of the state.

The actual northern part of California is mostly just wilderness, except for the dying industrial centers of Chico and Redding, and the beautiful coastal town of Eureka (pop. 26,512; metro area 45,034).

Eureka sits on a wet part of the northern California coast, so it was settled long before California developed the elaborate irrigation and pipeline systems that rocketed the central valley and Los Angeles to their modern importance.

Eureka also has one of the mildest, most pleasant climates in America, tempered by the cold waters of Humboldt Bay. It’s also only a short drive from the scenic Trinity Alps and the Six Rivers National Forest.

So, Eureka had a lot of growth in the 19th century and the early 20th. By 1920 the city had reached half its present population, and by 1960 it had a greater population than it has today. So, the city has a quaint turn-of-the-century old town:

And a lot of cool midcentury moderns:

Eureka is cool!

So why did it stop growing? Simple: growing would be illegal.

Eureka, like many California cities, has an “urban boundary” beyond which development is restricted to “rural” uses, which means pretty much nothing. That means Eureka is trapped in its old borders (plus the neighboring villages of Rosewood, Cutten, and Myrtletown, which are roped into the urban boundary as well). Look at the picture of Eureka above. You can trace the urban border yourself.

Eureka also has strict zoning that prevents building more than is already there.

That means that living in Eureka is expensive. An extremely modest home goes for $300,000–$400,000:

While a normal, middle-class house might sell for nearly a million:

Paradoxically, living in Eureka is so expensive that the population is falling as residents are slowly pushed out by tourism and second homes.

Which is why Big Eureka is the best YIMBY idea that exists. Most small cities can’t grow because they can’t outcompete the “agglomeration effects” that a big city offers. So, Boston and New York strain at the seams while Hartford, Pittsburgh and Syracuse are basically free to live in. That leaves us pretty much stuck with the big cities we have, which is bad because city locations used to be determined by things that don’t really matter any more (like water transit) but cities today should be located next to value-adding amenities like mountains and beaches.

So the agglomeration effects trap us in sub-optimal urban locations. That means that normal YIMBYism can be sort of bad: it unlocks growth in New York and Boston, but that keeps us stuck in New York and Boston even if they aren’t really the best places for cities any more.

YIMBYism can’t help most small, perfectly located cities to grow because they’re pretty cheap to live in anyway. They just aren’t desirable because they lack agglomeration effects.

Eureka is the exception. It’s so nice that it’s crazy expensive even though there’s nothing there.

Which means you can make it into a big city just by relaxing the land-use regulations.

Eureka is just the best place for a big city. Perfect weather, no risk of drought, and near beautiful scenery and productive farmland.

And yet Eureka is just a little town.

Eureka is emblematic of the infamous phenomenon known as “The Lack of Shit Between Portland and San Francisco,” shown below:

It’s weird that there’s not more shit there? The Southern Coast has Santa Barbara, San Luis Obispo, Monterey, Santa Maria, Salinas, Santa Cruz, etc.

The East Coast has a big city pretty much anywhere you could safely live, and the smaller coastal cities of the South (Savannah, Charleston, Wilmington) are growing as fast as you can build homes.

The Norcal coast basically just has Eureka. And Oregon has almost nothing.

It’s Big Eureka time.

A Socially Conservative Vision for the United States

I consider myself a social conservative. By that I mean that I would endorse the following package of laws, which are aimed at the suppression of vice.

Rule 1: All names for babies must be chosen from a list of ~1,000.

If you wish to choose a name not on the list, you must document that it is a name in your culture, a name in some society’s classical or medieval literature, a family surname, or a commonly known proper noun associated with the family (e.g. “Tacoma”).

Rule 2: No name may be issued to more than 1% of babies in any 12-month period.

If you wish to use a name that has hit the cap, it must be the name of the infant’s close relative or godparent.

Rule 3: There shall be one superhero film every two years. Like the Olympics, DC and Marvel trade off.

There is ONE exception. For each major Oscar a studio wins (Picture, Director, Screenplay, Editing, Actor, Actress) on a superhero movie, that studio may make one additional superhero movie.

Rule 4: Copyright on films extends no longer than the life of the director, first screenwriter, and up to two leads.

Rule 5: All commercial video services and social media are turned off from midnight to 6:00 AM, except for direct messaging and A/V chat.

Rule 6: Dating or “social matching” services may only “match” each person with one other person per day.

Rule 7: Having an adult Pitbull that is not spayed or neutered is a criminal offense. Breeding Pitbulls is a separate criminal offense.

Rule 8: Weed is decriminalized, but there are fines for sale (return to the laws of New York and Connecticut as of ~2018).

Rule 9: Gambling is only legal in Las Vegas, Atlantic City, and according to Tribal law.

Rule 10: All gambling funds must be transferred through one “gambling account.” If this account goes in the red by more than $3,000 in a 12-month period, no more funds can be transferred to the account for 12 months.
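For what it’s worth, Rule 10 is mechanically simple. A minimal sketch of the lockout check, assuming a single ledger of signed transfers (and simplified: it only checks the trailing-12-month net, not the 12-month lockout clock that would follow):

```python
# Toy sketch of the Rule 10 lockout. The ledger format is an assumption;
# the $3,000 threshold and 12-month window are the ones stated above.

from datetime import date, timedelta

LIMIT = -3_000  # max net losses allowed over a trailing 12 months

def is_locked(transfers, today):
    """transfers: list of (date, signed_amount) through the gambling
    account, negative amounts being losses. Locked if the trailing
    12-month net is more than $3,000 in the red."""
    year_ago = today - timedelta(days=365)
    net = sum(amt for d, amt in transfers if d > year_ago)
    return net < LIMIT

txns = [(date(2023, 1, 10), -2_500), (date(2023, 6, 1), -1_000)]
print(is_locked(txns, date(2023, 7, 1)))   # True: down $3,500 this year
print(is_locked(txns, date(2024, 2, 1)))   # False: only the June loss counts
```

A real version would also have to persist the lockout date once the threshold trips, but the point is that the rule is a one-line aggregate check.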

Rule 11: There are ten college majors, excepting the technical fields (engineering, actuarial, accounting, health professions, and agriculture).

The ten majors are:

  1. Physics
  2. Chemistry
  3. Biology
  4. Mathematics
  5. Ancient literatures (must learn Greek, Latin, Classical Chinese, Hebrew, Arabic, or Sanskrit)
  6. Modern literatures (must learn one modern language)
  7. Philosophy
  8. Economics
  9. History
  10. Art/Art History

Colleges may offer concentrations within majors (e.g. “Theatre” within Literature, or “Political Science” within Economics). However, every major must have four core classes. For literature majors, the core classes depend on the language being learned.

Rule 12: Middle and High school students must participate in a range of extracurriculars such that students leave school around 6:00 PM (guidelines based on Success Academy).

Rule 13: Elementary schools must offer after-school care extending to 6:15 PM and on school holidays.

Rule 14: In middle and high schools, in each section of each class, the ranking of the boys in the section is posted publicly. This ranking is updated every week.

Students may opt out, but if so they cannot play boys’ sports.

Anyone of any sex can play on boys’ sports teams, but they must agree to have their academic rankings publicly posted.

Rule 15: Receiving government benefits for any child (SNAP, CHIP, EITC) is conditioned on that child’s school attendance. All other conditions (e.g. asset limitations) are removed, except for the trapezoidal structure of the EITC.

Rule 16: Tipping in bars, restaurants, cafes, and taxi services is illegal.

Rule 17: Public Sector union organizing is illegal.

Rule 18: Antitrust laws are enforced by specialized antitrust courts in which judges are Daubert-qualified Industrial Organization economists.

Rule 19: There is no lottery for H-1B visas.

Rule 20: All prisons are privatized. Payments to imprisonment firms are conditioned on the recidivism rates of their former convicts.

[Edit] Rule 21: It shall be illegal for a child under the age of 16 to be in possession of a smartphone or tablet device. [Can’t believe I forgot to put this in the original list]

Everyone’s Understanding of Redlining is Wrong

Everyone’s talking about redlining, but most of these discussions are bad. Very few people actually know what it was, and even fewer know its effects.

Redlining improved housing affordability for black Americans, especially poor ones. Neighborhoods weren’t redlined because they were black. Neighborhoods became black because they were redlined, and redlining improved affordability for renters.

Other housing discrimination was bad. Redlining was probably the only form of housing discrimination that maybe helped black people.

Strictly speaking, the “redlining” phenomenon is that in the 1930s the Federal Housing Administration started underwriting mortgages. However, they only wanted to underwrite mortgages that wouldn’t default (i.e., mortgages for properties that weren’t going to decline in value). Now that the government was investing in mortgages, they wanted to make the appraisal process more efficient. So they requisitioned various efforts to improve appraisal practices.

One thing they did was commission an economist to make maps of which neighborhoods were likely to decline in value. There were four categories of neighborhoods: A the best, D the worst. D neighborhoods were marked with red highlighter (red lines). The easiest way to mark neighborhoods that were going to decline was based on their racial and ethnic demographics, so that’s what the economist did. That ended up being pretty accurate. Then the government basically forgot about these maps, but some banks continued to use them to decide where they would sell mortgages.

What did those maps look like? We usually think of “redlined” neighborhoods as being black neighborhoods, but actually, when the maps were drawn, about 85% of residents in redlined neighborhoods were white. Most urban black neighborhoods were redlined. But there weren’t that many of them. In the early ’30s, black people were still overwhelmingly Southern, and the South was still overwhelmingly rural.

Today, redlined neighborhoods are maybe plurality black. What changed? Simple: there was a lot of migration by black Americans to redlined neighborhoods. Not only that, black Americans were more likely to move to a city at all if it had redlined neighborhoods.

There’s good data that this was causal. We can tell because there was a sharp population threshold that decided whether a city got a map or not. Cities right above the threshold had substantially more black in-migration than cities right below, and most of that in-migration was to the redlined neighborhoods.

The causal effect extended far beyond traditionally black neighborhoods. The effect of redlining on in-migration and property values was highest on “yellow-lined” neighborhoods (category C), which had zero black residents when the maps were drawn, but which the map drawers thought would decline in value (often they were immigrant neighborhoods).

If we assume that people are at least mildly rational in the aggregate, that means that redlining increased quality-adjusted housing affordability for black Americans.

(It’s pretty hard to overcome that assumption. You have to come up with some reason that a policy that makes life harder for someone also makes them more likely to move there. I don’t think you can.)

So, at least in the short run, redlining made a city more livable for black people.

Why? Well, we observe that redlining discouraged white people from moving in. That’s probably because of how it changed the behavior of private banks, since it had at most a very small effect on where the government would subsidize loans.

How did the maps change private behavior? Probably by encouraging white flight. A common feature of midcentury housing markets was white people deciding not to live somewhere for racist reasons and that area consequently becoming affordable to black residents. [Note: in the 1930s-40s, gentrification was more severe than today, and urban decay less of a problem, because suburbs were smaller and the urban population was rapidly growing].

Redlining marked out certain areas where mortgages would not be provided. At the time, black people were desperately poor, so owning urban real estate was basically out of the question for them, while white homeownership rates were very high. By marking out areas where mortgages were unavailable, redlining essentially created areas where white people would not move, which lowered real estate prices for the people who would move there. Being redlined caused real estate prices to fall about 15%. That's great for renters, since it's entirely a fall in the value of the land in the places they wanted to live. Granted, investment will likely be lower, but, in the end, black Americans have paid lower rents than whites for materially equivalent property since about 1970. (That's at the neighborhood level; at the household level things get more complicated.)

Now, unfortunately, the mechanism by which redlining improved affordability also increased residential segregation. Residential segregation is probably bad? Raj Chetty has data showing that leaving highly segregated neighborhoods helps poor kids, but only under highly specific conditions.

So, it’s possible that even if redlining made cities more accessible in the short run, it created bad social outcomes over time. However, that’s speculative. Furthermore, it’s hard to test because the population drawn to a highly segregated city (where homeownership and upward mobility are more difficult, but immediate options are good) will be predictably skewed.

So why does redlining get such a bad rap?

I think it’s because people confuse it with other housing discrimination that was actually bad.

Some of this discrimination was statistical, and still quite pernicious. There’s strong evidence that banks in the ’30s would decline to provide mortgages to the first black family on any block, since that would lower home values, and banks tended to have a lot of other investments in the same area. They didn’t want to lower the value of their own assets.

There's probably also, to this day, some taste-based discrimination.

It’s just bizarre that our byword for racist housing inequality is the one form of inequality that was plausibly pro-black.

Elon’s Battle Against Substack is Probably Illegal

Twitter has a new policy that any tweets with links to Substack cannot be retweeted, liked, or commented on.

The reason is pretty obvious: Substack is a competitor to Twitter for attention and the take-o-sphere. In particular, Substack just announced a Notes feature, which is transparently a twitter clone.

This seems like anticompetitive conduct, which would be illegal under Section 2 of the Sherman Antitrust Act, so long as Twitter is a monopolist.

If Twitter is a monopoly, the big legal question is whether a company is "attempting to exclude rivals on some basis other than efficiency." Aspen Skiing Co. v. Aspen Highlands Skiing Corp., 472 U.S. 585, 605 (1985). So if (1) Twitter is worsening its own product to harm Substack, and (2) Twitter has monopoly power, this is an antitrust violation.

(Legal note: you might also have to show that the ban hurts Substack enough for the ban on retweets to be a semi-reasonable strategy by Twitter. The ban has to plausibly hurt Substack more than Twitter. There has to be enough evidence for a reasonable jury to find the conduct would "likely result in sustained supracompetitive pricing." Brooke Grp. Ltd. v. Brown & Williamson Tobacco Corp., 509 U.S. 209, 226 (1993). That rule usually comes up in predatory pricing, where over-enforcement of antitrust law is a big problem. Here, I think Twitter would have a hard time: restricting traffic clearly hurts Substack more than Twitter, given the network effects at work.)

Banning retweets of Substack links pretty clearly makes Twitter a worse product. Many people use Twitter to promote their Substacks, discussion on Twitter is often about Substacks, etc. Allowing retweets improves ~the discourse~. But you can see how banning retweets hurts Substack more than it hurts Twitter: Twitter is a lot bigger and there are network effects here, so open flow between Substack and Twitter improves Substack a lot more than it improves Twitter, and cutting off Substack keeps Substack from growing.

Twitter did a similar ban on links to Mastodon a while ago but gave it up after a while. In the Substack case I think the antitrust claim is stronger, because linking to Mastodon doesn't obviously make Twitter better, but linking to Substack does: links to Substacks form a focal point for discussion on Twitter, which is a lot of what Twitter does.

Twitter somehow needs to argue that links to Substack somehow make Twitter worse. Perhaps they can argue that when people follow the links, they’re on other websites and the Twitter discourse isn’t as fun.

However, that argument is going to be hard to make, because Twitter doesn't similarly disfavor links to other news, opinion, and blogging sites like WordPress. Choosing to allow links to most blogs and opinion sites while punishing only links to blogs that happen to be hosted by a Twitter competitor strongly suggests anticompetitive purpose, and also suggests that the ban makes Twitter worse. Twitter maintains the external-link feature and allows external links to news because that feature improves Twitter. It follows that limiting links to Substack probably worsens it.

There's the further question of whether Twitter has monopoly power. It's not clear how you would define the market, but Twitter clearly has power over Substack. Substack has data on where people come to Substack from. I don't have that data personally, but anecdotally it seems that most non-subscriber traffic on Substack comes from Twitter.

European Case 

I think the DMA gives an especially strong case against Twitter. The EU's Digital Markets Act says that gatekeeper platforms cannot "prevent consumers from linking up to businesses outside their platforms." Here, Twitter isn't explicitly banning links to outside websites, but banning likes and retweets will surely prevent a lot of people from linking to Substack who otherwise would. So it's a murky case.

It looks like this DMA rule is why they’re limiting links to Substack rather than banning them entirely.

Against Qualification

There is a common verbal tic in fancy liberal non-profit circles of prefacing almost everything one says with some variant on “I think.”

"I think the case for that is Exxon Mobil v. Allapattah"

“I believe it costs $7”

“I’m pretty sure it’s over here”

Invariably, the statements so qualified turn out to be correct.

It is the nicest and most conscientious people who use these qualifications. Nevertheless, they are a complete waste of everyone’s time.

Why? Because the more that people use qualifiers, the more that other people are forced to use them. Why? Because the meanings of words are simply their conditions of use. One condition of use is the certainty of the speaker. If other people use lots of qualifiers, you have to as well.

This is because of a quirk of the English language: qualifiers usually do not signal explicit levels of certainty. Instead, we signal certainty using qualifiers that have a rank order of confidence.

Common qualifiers in ranked order are:

  1. “Don’t quote me, but it might be…”
  2. “It might be…”
  3. “I think it might be…”
  4. “I think it’s…”
  5. “I’m pretty sure it’s…”
  6. “It’s…” [No qualifier]
  7. “I’m sure that it’s…”
  8. “It’s definitely…”
  9. “I’m absolutely certain that it’s…”
  10. “Trust me. It’s…”
  11. “I swear to God. It’s…”

Some of them are phrased as expressions of probability or credence. Others are phrased as solicitations of trust. But the difference is mostly superficial.

When one explicitly uses a probabilistic/credence qualifier, there is an implicit recommendation that you should or should not rely on the assertion, given its high or low probability.

Similarly, when one solicits trust, there is an implicit assertion that one is certain enough that trust is merited.

Functionally, probability qualifiers and trust-solicitation qualifiers are equivalent most of the time.

But very few of our qualifiers are phrased in terms of explicit probabilities. People don’t go around saying “I am 78% sure that X.” People instead say “I’m pretty sure that X.”

Even some qualifiers that appear to signal explicit probabilities actually do not. For example, one could interpret “I’m certain that X” to mean “There is a 100% probability that X.” But, you shouldn’t. Yes, in some abstract sense, certainty is believing in a 100% probability of truth. But the practical sense of certainty is different. Saying “I’m certain that X” doesn’t signal a 100% probability in normal speech. Otherwise, we wouldn’t need the phrase “I’m absolutely certain that X.” You wouldn’t think that someone lied to you if they told you they were certain that Chicago is west of Atlanta because they were 99.5% sure. You understood what they meant because they spoke in the conventional way. Qualifiers aren’t tied to explicit probabilistic benchmarks.

Instead, meanings are tied to use ad-hoc. You know how certain other people are when they say “I’m pretty sure that X” because you know how often they turn out to be right. So, you know how certain you should be in order to say “I’m pretty sure that X.” You just need to be as certain as other people are. You need to be as likely to be right as other people are when they use that qualifier.
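The point that a qualifier's meaning is just its empirical track record can be made concrete. Here is a toy sketch (the log of statements and outcomes is entirely hypothetical) that recovers the de facto "meaning" of each qualifier from how often claims made with it turned out to be true:

```python
from collections import defaultdict

# Hypothetical log of (qualifier used, whether the claim turned out true).
statements = [
    ("I think", True), ("I think", True), ("I think", False),
    ("I'm pretty sure", True), ("I'm pretty sure", True),
    ("I'm pretty sure", True), ("I'm pretty sure", False),
    ("no qualifier", True), ("no qualifier", True), ("no qualifier", True),
]

# Tally hits and totals per qualifier; the accuracy rate is the phrase's
# de facto meaning in this community of speakers.
tally = defaultdict(lambda: [0, 0])  # qualifier -> [correct, total]
for qualifier, correct in statements:
    tally[qualifier][1] += 1
    if correct:
        tally[qualifier][0] += 1

calibration = {q: c / n for q, (c, n) in tally.items()}
for q, rate in sorted(calibration.items(), key=lambda kv: kv[1]):
    print(f"{q!r}: right {rate:.0%} of the time")
```

To speak informatively, you should use a given qualifier only when your own hit rate would match the community's rate for that phrase.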

This has two implications:

First, A SPEAKER should try to conform to the behavior of other speakers. They should use the qualifiers that everyone else uses if they want to inform.

Second, A LANGUAGE is best and most efficient if unqualified speech is the most common.

That is, there is a normal amount of certainty that you have when you speak most of the time. It is best if the typical person, in that circumstance, says what they mean without qualification. If the typical way to start a sentence is with “I think that…” then everyone’s time is being wasted. Not only is time wasted, mind is wasted. Sentences with extra clauses are more difficult to parse and to construct, and so thought is constrained. So, it would be best if the regular degree of certainty were attached to the shortest, simplest sentences.

But, if wishes were horses…

Instead, there are many domains and communities in which people are constantly saying “I think that” and almost never saying “I’m certain that.”

[In this (as in so many cases) the French are the only people who know how to live. In French discourse, there is almost perfect parity between use of crois and of certainement. Many lovely flowers bloom in the soil of French arrogance.]

I'm sure you know the communities where speech is loaded with qualifiers. They're the nice liberal communities where parents listen to NPR and you hear the phrase "people experiencing homelessness." They're the communities where conservative boys all think they're oppressed and everyone is appalled at spanking. Sociology departments. Park Slope. Yale.

Why does this happen? Here are some reasons.

First, people don’t want to turn out to be wrong. They especially don’t want to turn out to be wrong on a matter that someone relied on. So, people would rather appear less confident than they really are than appear more confident than they really are. Highly agreeable and conscientious people will be especially afraid of appearing more confident than they are, since they don’t want anyone to incorrectly rely on them.

Second, people in these communities just do not like you to be confident. They sympathize with the meek and the shy. They dislike the bold and the cocksure. So, if you want to be liked, adding qualifiers is an easy way to do it. These are highly egalitarian communities. They don’t like it when you make a big splash.

Third, these are high agreeability communities where it is considered rude to contradict or question people. Think of every time you’ve gone to a seminar and seen a transparent flaw in the speaker’s argument but you didn’t say anything.

[I have no model for why fancy Americans act this way. Elites in America, England, and Canada have strong norms against overt disagreement or candor. This manifests in American pathological positivity and English shyness. Scots, Irish, Dutch, Germans, and Australians do not have these norms, despite being similar cultures.]

Despite these norms against contradiction, you are allowed to question people who aren’t very confident. If someone says “I think that X,” it’s easy to ask why they think so. If someone instead says “X,” it’s a little rude. That means that if someone wants to be maximally informative, they should under-signal their confidence. That way, others can comfortably inquire further.

But the trouble is that you end up on a treadmill where everyone races to appear less confident than they really are, and in the end qualifiers have consumed clarity. This resembles the Euphemism Treadmill where you try to avoid using words with negative connotations, so your new words get the same connotations the old ones had for the exact same reasons the old ones did.

And, the communities where everyone is using too many qualifiers are exactly the places where the euphemism treadmill runs in high gear.

Both euphemisms and qualifiers are also similar to tipping. Tipping is a nice thing to do. While there is a relatively standard tipping amount, it’s not perfectly solid. Should you tip 18%? Should you tip 20%? The nice thing to do is to tip more than is usual. But that means that if everyone is nice, then tipping amounts go up over time (typical tip in 1970 was 15%).

However, in the long run, more tips don’t mean waiters end up with higher compensation. As tip amounts increase, official wages will go down, since tips are priced into the wages of waiters and costs of meals set by the market. Tip amounts don’t change the actual supply and demand for waiters. Unfortunately, tips are worse than wages for lots of reasons, and a system where most waiter compensation is in tips is a bad system. So, over time, the world gets worse because everyone is being nice.
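The wage-adjustment argument above is simple arithmetic under the assumption that the market pins down total waiter compensation. A toy sketch (all dollar figures are illustrative, not empirical):

```python
# Toy model: the market fixes total hourly waiter compensation; tips and
# the posted base wage just split it between them.
MARKET_COMPENSATION = 30.0  # hypothetical market-clearing pay, $/hour

def base_wage(avg_check: float, covers_per_hour: float, tip_rate: float) -> float:
    """Hourly base wage once expected tips are priced into compensation."""
    expected_tips = avg_check * covers_per_hour * tip_rate
    return MARKET_COMPENSATION - expected_tips

# As the customary tip rises from 15% to 20%, the posted wage falls,
# leaving total pay unchanged.
for rate in (0.15, 0.20):
    wage = base_wage(avg_check=40.0, covers_per_hour=3.0, tip_rate=rate)
    total = wage + 40.0 * 3.0 * rate
    print(f"tip {rate:.0%}: base wage ${wage:.2f}/hr, total ${total:.2f}/hr")
```

So rising tip norms shift the composition of waiter pay toward tips without raising its level, which is exactly the post's complaint.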

My point is, stop using so many fucking qualifiers.
