When the Pruitt–Igoe social housing complex was built in St Louis in 1954, it was hailed by the journal Architectural Forum as “the best high apartment of the year”. By the late 1960s the complex had become a byword for crime, decay and deprivation.
In theory, Pruitt–Igoe should have worked. Bold, massive designs of its kind found favour between the 1950s and the 1970s as an answer to postwar housing shortages and the population growth of the baby boom.
But the unintended consequences were disastrous. To take one example, skip-stop lifts, which stopped only at every third floor, were designed to get people mingling with one another. Instead, they made it impossible to tell who was a neighbour and who was a stranger.
Within years the project was doomed. It was finally demolished in 1972, just 16 years after it was completed, an event Charles Jencks described as “the day modern architecture died”.
The architect of Pruitt–Igoe, Minoru Yamasaki, was no fool. He was one of the most prominent architects of his generation, going on to design the World Trade Center.
The extent to which the complex’s decline can be laid solely at the door of its design remains a matter of debate. But one confession from the architect tells its own story:
I never thought people were that destructive.
The design of Pruitt–Igoe, like many mid-century architectural projects, relied on an idealised view of humanity. The flaws in those ideas became apparent within years.
Similarly, the designers of digital products like Facebook and Twitter failed to account for the bad actors who would end up using their platforms.
We are now beginning to live with the consequences of extremists manipulating these systems to spread misinformation. It is something the designers never expected. Their failure to grasp the problem quickly enough is degrading the quality of content online, and is making it increasingly difficult for people to tell fact from fiction.
The problem is rooted in a fundamental failure to think through the ethics of their decisions.
Last week, Twitter suspended its blue tick verification system after admitting that the concept was broken. The decision came after a blue tick was awarded to the organiser of an extremist rally.
Twitter co-founder Jack Dorsey admitted that the company had been too slow to act on its knowledge that the system was flawed.
We should’ve communicated faster on this (yesterday): our agents have been following our verification policy correctly, but we realized some time ago the system is broken and needs to be reconsidered. And we failed by not doing anything about it. Working now to fix faster. https://t.co/wVbfYJntHj
— jack (@jack) November 9, 2017
Twitter has become a cavern of noise. Its design incentivises users to express viewpoints in an unsophisticated manner, leading to its reputation for being a cesspit of abuse.
watching twitter employees realize how bad twitter is is like the scene from 2001 where the apes learn how to use tools pic.twitter.com/0wLeITKJ3g
— Stephen Lowe (@TheWikiHowGuy) October 4, 2017
In an important Twitter thread, Kumail Nanjiani outlined his concerns that people working in tech simply are not thinking through the ethical considerations of what they are designing.
And we'll bring up our concerns to them. We are realizing that ZERO consideration seems to be given to the ethical implications of tech.
— Kumail Nanjiani (@kumailn) November 1, 2017
"We're not making it for that reason but the way ppl choose to use it isn't our fault. Safeguard will develop." But tech is moving so fast.
— Kumail Nanjiani (@kumailn) November 1, 2017
The view at Facebook is that ‘we show people what they want to see and we do that based on what they tell us they want to see, and we judge that with data like time on the platform, how they click on links, what they like.’
These may seem like useful ways to measure success, but the attitude is far too narrow. It focuses only on individual users, with no consideration for the wider impact on society. Just because someone clicks a link or spends time on a page, it doesn’t mean it did them any good.
We have all followed a link with a clickbait headline and regretted it. We even sometimes click links when we know we really shouldn’t.
It’s like when you drive past a car crash. You can’t resist taking a look out of curiosity, even though you know you probably shouldn’t. It’s a sad fact of human nature.
If a road network were run like Facebook’s algorithm, it would notice that people like looking at car crashes, and it would engineer more car crashes. When algorithms prioritise shallow metrics above all else, they set off a vicious cycle. Before long, there would be car crashes all over the road.
That is why the quality of content on Facebook has declined over time. It is designed for digital rubbernecking. Because Facebook’s algorithms now prioritise this low-quality content, it has incentivised publishers to enter a race to the bottom.
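The race to the bottom is, at heart, a simple feedback loop, and it can be sketched in a few lines of code. The toy simulation below is purely illustrative — it bears no relation to Facebook’s actual ranking systems — and rests on one labelled assumption: that clickbait trades off against quality, so the more a publisher optimises for clicks, the less substance survives.

```python
# Toy simulation of an engagement-driven feed. Purely illustrative.
# Assumption: an item's quality trades off against its clickbait level.
import random

random.seed(42)

def simulate(rounds=10, publishers=20, feed_size=5):
    # Each publisher starts with a random clickbait level in [0, 1].
    levels = [random.random() for _ in range(publishers)]
    history = []  # average quality of the feed, round by round
    for _ in range(rounds):
        items = [{"clickbait": c, "quality": 1 - c} for c in levels]
        # The ranker optimises a single shallow metric: clickability.
        feed = sorted(items, key=lambda i: i["clickbait"], reverse=True)[:feed_size]
        history.append(sum(i["quality"] for i in feed) / feed_size)
        # The race to the bottom: every publisher nudges its output
        # towards whatever sits at the top of the feed.
        top = feed[0]["clickbait"]
        levels = [min(1.0, c + 0.2 * (top - c)) for c in levels]
    return history

quality = simulate()
print(f"feed quality, round 1: {quality[0]:.2f}, round 10: {quality[-1]:.2f}")
```

Run it and the feed’s average quality only ever falls: the ranker surfaces the most clickable items, publishers converge on them, and substance drains out of the pool.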
This has left us all worse off. The initial promise of the web has been eroded by the tech giants who are making publishers — both traditional and independent — vulnerable to the whims of the latest algorithm tweak.
ProPublica found that Facebook made it possible for advertisers to target people expressing an interest in topics such as “Jew haters” and “how to burn Jews”.
The reality is that Facebook and the other big digital platforms have no incentive to stop this kind of behaviour, because this is how they make their money.
Witness a Facebook lawyer’s inability to tell Senator Al Franken that Facebook would refuse to run US political ads paid for in North Korean currency.
Senator Al Franken frazzles Facebook's top lawyer with a simple yes or no question. This is hilarious & quite sad.pic.twitter.com/Wy6SMbTCXb
— Ricky Davila 🇵🇷 (@TheRickyDavila) November 2, 2017
In response to the revelation that it was possible to target ads at Jew haters, Facebook’s chief operating officer Sheryl Sandberg said:
We never intended or anticipated this functionality being used this way…
It is a chilling echo of the mid-century architect’s confession.
Just as Minoru Yamasaki failed to anticipate how destructive people could be in the buildings he designed, Facebook and Twitter have failed to anticipate how destructive people would be on their platforms. And just as Pruitt–Igoe and other social housing projects fell into dramatic decline within years, Facebook and Twitter are suffering a similar fate.
Architects had to learn the lessons from the mistakes made in their bold designs of the mid-20th century. Now it is time for today’s massive digital platforms to face up to the reality of what they have created.
They need to truly understand the people using their products — and not just at the narrow, individual level where a like or a click is valued above all else.
More lessons from the mistakes of postwar social housing: