Saturday, September 26, 2020

 

Trump’s Selection of Amy Coney Barrett for the Supreme Court Is Part of a Larger Antidemocratic Project 

By John Cassidy The New Yorker


If the Senate confirms Amy Coney Barrett, five Supreme Court Justices will have been selected by a President who initially won the White House while losing the popular vote.

 

On Friday evening, CNN, the New York Times, and other media outlets reported that Donald Trump had told associates that he has chosen Judge Amy Coney Barrett, a prominent social conservative, to replace the late Justice Ruth Bader Ginsburg on the Supreme Court. Although Trump’s choice of Barrett—whom he appointed to the Seventh Circuit Court of Appeals, in 2017—wasn’t unexpected, it’s sure to escalate the bitter political conflict surrounding the Republican effort to rush a nomination through the U.S. Senate less than forty days before the election.

Right now, it looks like Mitch McConnell, the Senate Majority Leader, has the votes to do that. The brazen and unapologetic nature of this G.O.P. power play is fanning perfectly justified outrage, and the selection of Barrett—who has ruled in favor of restrictions on abortion and who once served as a law clerk to Justice Antonin Scalia—as a replacement for a liberal icon will further inflame passions. The nominee is expected to appear alongside Trump at the White House on Saturday afternoon. Some Democratic senators have openly considered boycotting the confirmation hearings, which would be unprecedented, at least in the modern era.

With the first Presidential debate, on Tuesday, set to add to the political tension, the next few weeks will be fast-moving and nerve-racking. But it is worth first stepping back and considering the larger context in which all this is taking place. If the aftermath of Ginsburg’s untimely death has taught us anything, it’s that the antiquated institutions of American democracy are in urgent need of repair—that is, if the country can get through the next couple of months with these institutions still intact, which at times this week hasn’t seemed like a given. The alternative to wholesale reform is almost too ghastly to contemplate: the continuation and intensification of a years-long effort to consolidate minority rule. For, when you strip away all the diversions and disinformation, that is the project that the Republican Party and the forty-fifth President are engaged in.

According to a new poll from ABC News, which was released on Friday, fifty-seven per cent of Americans think that the job of selecting Ginsburg’s replacement should be left to the next President; only thirty-eight per cent think Trump should make the pick. Other surveys have found similar results. The data site FiveThirtyEight examined twelve polls and found that, in the aggregate, fifty-two per cent of respondents favored waiting until after the election to fill Ginsburg’s seat, while thirty-nine per cent said that Trump should fill it immediately.

This is just the current antidemocratic outrage. When you consider the combined influence of the Electoral College and the Senate, both of which amplify the power of Republican voters living in less densely populated parts of the country, the empowerment of the minority—an overwhelmingly white and conservative minority—goes far beyond this instance, egregious as it is.

If the Senate confirms Trump’s nominee, five members of the Court will have been selected by a President who initially won the White House while losing the popular vote. George W. Bush nominated John Roberts, the Chief Justice, and Samuel Alito; Trump picked Neil Gorsuch and Brett Kavanaugh, and will soon nominate Barrett. To be sure, Bush’s two picks came during his second term, which followed a handy popular-vote victory over John Kerry, in 2004. But Bush wouldn’t have been running in the 2004 election as an incumbent if the 2000 election had been decided on the basis of who received the most support nationally. In the popular vote, Al Gore beat him by more than half a million votes.

Why bring up an event that, to younger readers, may seem like ancient history? Because, in this country, American history isn’t something that resides solely in the past. With the nation’s early years having bequeathed to posterity an unrepresentative Electoral College and an unrepresentative U.S. Senate, this history is ever present—shaping some outcomes, ruling out others, and exercising a baleful influence that politicians of bad will, such as McConnell and Trump, can seize upon to further entrench the minority of which they are very much part.

After all, McConnell’s status as the Dark Lord of Capitol Hill depends on a political system that affords the same number of senators to California (population 39.1 million) as it does to Wyoming (population 0.6 million). According to the Real Clear Politics poll average, Trump is trailing Biden by 6.7 percentage points, and he’s been well behind all year. At this stage, his hopes of getting reĆ«lected hinge almost entirely on cobbling together another majority in the Electoral College, to negate the expressed will of the majority. In trying to do this, he has amply demonstrated his willingness to rely on voter suppression, challenge legitimate mail-in ballots, and even possibly call upon Republican legislatures to set aside their states’ election results and appoint slates of loyalist electors to the Electoral College. (In the magazine this week, my colleague Jeffrey Toobin wrote about all these possibilities.)

When points like these are put to Republicans, some of them reply that this is a republic rather than a democracy, which is conceding the point. A somewhat more sophisticated argument is that the United States is a representative democracy rather than a direct democracy, and that the Founders expressly designed the seemingly antidemocratic elements of the political system to protect minorities and prevent mob rule. The proper response to this argument is to invoke actual history rather than fables.

The Founders were men of property and eighteenth-century views. In his book, ā€œThe Framers’ Coup: The Making of the United States Constitution,ā€ from 2016, Michael J. Klarman, a professor at Harvard Law School, explains that they ā€œhad interests, prejudices, and moral blind spots. They could not foresee the future, and they made mistakes.ā€ Largely drawn from the landed class, they had little interest in empowering the common man, and no interest at all in empowering women and Black people. But, unlike many latter-day ā€œconstitutionalists,ā€ they were aware of their own shortcomings. Although they tussled long and hard over the system they created, they didn’t consider it a perfect solution or something that couldn’t be altered in the future, depending on the circumstances and exigencies of the time. ā€œAs Jefferson would have recognized,ā€ Klarman writes, ā€œthose who wish to sanctify the Constitution are often using it to defend some particular interest that, in their own day, cannot be adequately justified on its own merits.ā€

In the nearly two centuries since Jefferson’s death, some of the more objectionable aspects of the system that he helped to create have been reformed and updated. It is our misfortune to be living through a period in which Trump and his allies are busy exploiting the system’s remaining weaknesses for their own iniquitous and antidemocratic ends. Disturbing as it is, the rushed nomination of Amy Coney Barrett is but one part of a bigger and even more alarming story.

 

Monday, September 21, 2020

The Venerable Prejudice Against Manual Labour

 


 BY EMRYS WESTACOTT 3 Quarks Daily



Whether a certain line of work is shameful or honorable is culturally relative, varying greatly between places and over time. Farmers, soldiers, actors, dentists, prostitutes, pirates and priests have all been respected or despised in some society or other. There are numerous reasons why certain kinds of work have been looked down on. Subjecting oneself to the will of another; doing tasks that are considered inappropriate given one’s sex, race, age, or class; doing work that is unpopular (tax collector), deemed immoral (prostitution), viewed as worthless (what David Graeber labelled ā€œbullshit jobsā€), or simply very poorly paid–all these could be reasons why a kind of work is despised, even by those who do it. One of the oldest prejudices, though, at least among the upper classes in many societies, is against manual labour.

The word ā€œmanualā€ derives from manus, Latin for ā€œhand,ā€ and even in English the linguistic connection between physical labor and ā€œhandā€ persists: we still speak of ā€œfarmhandsā€ or ā€œfactory hands.ā€ But the concept of manual labor extends to any kind of work that requires bodily strength, or where the physical aspect of the activity is thought to greatly outweigh the cerebral. This sort of work has been looked down on by social elites in many societies from time immemorial. Some reasons for this are fairly obvious. Manual labor is often dirty, unhealthy, exhausting and unpleasant; much of it is also unskilled, tedious, and poorly paid. These are all seen as good reasons for avoiding it if possible, at least as a way to make a living. So it is generally assumed (at least by the privileged few who don’t have to do it) that those who spend their days engaged in work of this kind probably have little choice: they must be either slaves, or serfs, or people of limited ability who are unable to find a better way to put food on the table. And even if they start out with a capacity for ā€œhigher thingsā€ā€“like delicate feelings, or moral virtueā€“long hours of menial drudgery will crush it out of them.

Upper-class disparagement of manual labor goes back a long way. In the Republic, Plato describes how people’s souls ā€œare bowed and mutilated by their vulgar occupations, even as their bodies are marred by their arts and crafts.ā€[1] Aristotle considers the lives of mechanics ā€œignoble and inimical to virtueā€; hence, along with farmers and tradesmen, such people are deemed unfit for citizenship.[2] Cicero echoes this view, deprecating anyone paid for menial service, and insisting that ā€œall mechanics are engaged in vulgar business; for a workshop can have nothing respectable about it.ā€[3] A distaste for working with one’s hands even extended to skilled artists such as musicians and sculptors. Plutarch writes:

It was not said amiss by Antisthenes, when people told him that one Ismenias was an excellent piper. “It may be so,” said he, “but he is but a wretched human being, otherwise he would not have been an excellent piper….Nor did any generous and ingenuous young man, at the sight of the statue of Zeus at Pisa, ever desire to be a Phidias [the sculptor]…”[4]

A similar contempt for manual labor persists across millennia, at least among intellectuals. In a work published in 1571, William Alley, Bishop of Exeter, opined that ā€œit is illiberal and servile to get the living with hand and sweat of the body.ā€[5] A well-known passage in Adam Smith’s The Wealth of Nations, published roughly two centuries later, describes what he takes to be the mental and moral consequences of repetitious manual tasks:

The man whose whole life is spent in performing a few simple operations… generally becomes as stupid and ignorant as it is possible for a human creature to become. The torpor of his mind renders him, not only incapable of relishing or bearing a part in any rational conversation, but of conceiving any generous, noble or tender sentiment, and consequently of forming any just judgement concerning many even of the ordinary duties of private life.[6]

Writing in the mid-nineteenth century, the Massachusetts lawyer and politician Theodore Sedgwick views social rank as corresponding naturally and properly to the kind of work people do:

The more a man labours with his mind, which is mental labour, and the less with his hands, which is bodily labour, the higher he is in the scale of labourers: all must agree to that whether they will or no. This is a real distinction in nature…It is upon this ground, that there ever have been, and ever will be, high and low, rich and poor, masters and servants.[7]

The prejudice against manual labor is not confined to Western civilization. In China, going back to the time of Confucius, educated gentlemen grew their fingernails long, sometimes extraordinarily so, as a sign that they did not work with their hands. This fashion remains popular with many young people today in China, and is still an indicator of social status. In Asia, the Americas, and Europe, the ideal of female beauty for a long time involved fair skin, which contrasted with the darker skin of peasants who worked in the sun, and therefore signified membership of the leisured classes.[8] This was the origin of the fashion among the upper classes for powdering their skin. Some expressions of the aversion to manual labor are so extreme that they are hard to credit. In The Theory of the Leisure Class, Thorstein Veblen describes Polynesian chiefs who ā€œpreferred to starve rather than carry their food to their mouths with their own hands.ā€[9]


The scientific revolution did something to challenge the traditional privileging of theoretical knowledge over practical knowhow–a prejudice that one can also find in Plato and Aristotle. As observation and experimentation became more central to scientific thinking, scientists became more engaged in practical tasks, grinding lenses, fashioning instruments, measuring quantities, and so on. But the old prejudice certainly did not disappear. Diderot’s Encyclopedia may have sought to raise the status of the ā€œmechanical artsā€ involved in manufacturing. Yet D’Alembert’s ā€œPreliminary Discourseā€ to the Encyclopedia (published in 1751) still disparages the people who actually practice these arts:

Most of those who engage in the mechanical arts have embraced them only by necessity and work only by instinct. Hardly a dozen among a thousand can be found who are in a position to express themselves with some clarity upon the instrument they use and the things they manufacture.

As historian of science Mark Young writes: ā€œThe idea that for craftspeople, like animals, the process of making is determined by instinct and chance rather than rationality, is one that resonated widely within the intellectual culture of the enlightenment…ā€[10]

Over the past two centuries, the distinction between theoretical and practical knowledge has become less significant. Professions with a practical orientation, such as engineering, dentistry, or farming, typically require a solid understanding of the relevant fundamental scientific principles. True, within academia, universities and liberal arts colleges may still look down their noses at institutions that mainly offer vocational training (as in ā€œIf we eliminate this program, we’ll be little more than a trade school!ā€). And philosophers naturally still like to think of their discipline as ā€œqueen of the sciences.ā€ But this attitude among humanities scholars should be viewed sympathetically as providing them with a little compensation for their lower salaries and decreasing job security.

The nineteenth and twentieth centuries saw some countervailing trends opposed to the disparagement of manual labour. Various religious, experimental and utopian communities such as those established by the Shakers and the Amish explicitly affirmed its value as part of an egalitarian outlook. Brook Farm, the transcendentalist utopian community founded near Boston by George Ripley in 1841, was intended ā€œto insure a more natural union between intellectual and manual labor than now exists; to combine the thinker and the worker, as far as possible, in the same individual.ā€[11] Socialist thinking in general was a powerful force pressing for a more egalitarian view of different kinds of work and for greater honor to be given to hard physical graft. One sees this in the ā€˜religion of work’ embraced by the Kibbutzim in Israel as well as in the Stakhanovite movement in the Soviet Union (named after coal miner Alexey Stakhanov, who in one shift in 1935 reportedly mined 227 tonnes of coal).


An interesting and presumably somewhat representative text that reflects the gradual shift in attitudes is an address titled “Manual Labor” given by Theophilus Abbot, president of Michigan State Agricultural College, to his students some time in the 1870s. On the one hand, Abbot criticizes the longstanding prejudice:

Work then is neither honorable nor base. Put mind into it, and however it may seem to be drudgery, the work becomes intellectual, high, and honored. Manual labor is only in low esteem because of its associations in the public mind. These associations are ignorance and rudeness. [i.e. rusticity][12]

But having offered this critique–the need for which indicates that the prejudice persists–Abbot then proceeds to take part in the old game of ranking different kinds of work, arguing that “labor mixed with brain is more noble than routine, unthinking work.”[13]

The prejudice against manual work (at least by those who don’t have to do it), and the shame associated with certain unskilled, menial tasks (at least in the minds of those who consider themselves above it), have thus certainly diminished somewhat in modern times. For those who don’t have to do it for a living, certain kinds of manual labor can even become an honorable recreation. Winston Churchill famously took up bricklaying. Many professionals who work with words and numbers from Monday to Friday now enjoy turning their compost pile or messing about with a chain saw at the weekend. And it should always be borne in mind that disparaging attitudes towards low-status work are far from universal. Several of the people interviewed by Studs Terkel in Working (published in 1974), including a supermarket cashier, a waitress, a mail carrier, and a gravedigger, recognize that their jobs are unglamorous but express genuine pride in what they do since they see themselves as making a worthwhile social contribution. An elderly immigrant from Sweden who spent her life in domestic service says,

When I first came to this country, being a maid was a low caliber person. I never felt that way. I felt if you could be useful and do an honest job, that was not a disgrace.[14]

Nevertheless, the stigma attached to many kinds of manual labor undeniably persists today in many societies, both rich and poor. In Brazil, for instance, members of the economic elite don’t just hire domestic workers to do household chores like cooking, cleaning, and washing clothes; they positively pride themselves on not being able to do these things themselves. According to anthropologist Donna Goldstein, ā€œthis very helplessness has become a positive form of status and prestige for these classes.ā€[15] Here, as elsewhere, part of the stigma attached to manual labor derives from the time when it was performed by slaves.[16]

In India, although discrimination on the basis of caste has been illegal since 1950, the caste system continues to determine the employment prospects and life chances of millions, with the lowest status jobs–for instance, cleaning human excrement off railway tracks–often being performed exclusively by the lowest caste.[17] More than that, though, it is still common to find among Brahmins, who traditionally have been priests and scholars, a disdain even for skilled manual work. In 2017 a Brahmin woman explained to a journalist why she decided against registering her son at a certain school in these terms:

I took him to vocational school. But when I saw they teach only manual work I brought him back. My son is a Brahmin and will not train and use tools sitting with sons of carpenters and coppersmiths.[18]

In China, the traditional upper-class and urbanite contempt for the peasantry was overturned under Mao, and for a time, the darker skin produced by laboring in the fields became a badge of honor. Since the 1990s, though, there has been a pronounced reversion to older attitudes, particularly among young urbanites who now associate darker skin with migrant labor, rural backwardness, and poverty.[19]

In Europe, North America, and similar modernized societies, academics, politicians, journalists, artists and other intellectuals now generally eschew disparaging remarks about manual labor or those who do it, no matter how menial the job. Anyone today who made public statements about manual workers after the fashion of Cicero or Adam Smith would immediately be condemned as an insufferable elitist. But while those whose work is manual yet highly skilled–for instance, mechanics, surgeons, dentists, artisans, artists, or musicians–enjoy much more respect today than previously, the longstanding habit of looking down on labor regarded as unskilled undeniably persists, especially (but not exclusively) among the upper social tiers. Many good-hearted, open-minded people who are well-educated and comfortably off will of course agree that the people who pick fruit, or sort packages, or clean toilets, or stock shelves, do useful work that deserves to be properly appreciated and rewarded. But as parents they would still find it hard to boast about their own offspring performing such tasks (unless it was understood to be a temporary situation), and hard to avoid feeling some disappointment if a future son- or daughter-in-law had no prospects beyond this sort of work.

And of course, some of those who perform low status jobs may internalize the contempt of others. A janitor interviewed by Terkel has this to say:

Right now I’m doing work that I detest. I’m a janitor. It’s a dirty job… ā€œYou’re a bumā€ – this is the picture I have of myself. I’m a flop because of what I’ve come to… It’s a dead end. Tonight I’m gonna meet a couple of old friends at a bar. I haven’t seen them for a long time. I feel inferior. I’ll bullshit ’em. I’ll say I’m a lawyer or something.[20]

One of the good consequences, perhaps, of the Covid-19 pandemic has been a greater appreciation of the social value of much of the work generally regarded as menial. Of course, this appreciation can easily fade away. In the long run, the best way of ensuring that it persists is to raise the status of such work. And that requires more than merely verbal or symbolic gestures of gratitude. Because today, as in ancient times, one of the main things that both causes and signifies the low status attached to so many jobs is low pay. Raising the status of such work thus requires, ultimately, a commitment to a much more equal society, where the elite aren’t able to cream off obscene amounts for themselves, and where the wealth is more evenly divided to ensure that everyone’s contribution is properly rewarded.

 

[1] Plato, Republic, Book VI, 495e.

[2] Aristotle, Politics, Book 7, Part IX.

[3] Cicero, Of Duties, 42. One should note, however, that although it has often been assumed that the views of intellectuals like Plato, Aristotle and Cicero are representative of their societies, this assumption is questionable. True, Cicero claims to be reporting “the general opinion” regarding kinds of work that are “mean and vulgar.” But numerous other ancient texts indicate that at least some forms of manual labor–for instance, farming and craftsmanship–were well respected back then. Just not by the philosophical elite.

Monday, September 14, 2020

Jack (‘Murph the Surf’) Murphy, Heist Mastermind, Dies at 83

 


He stole the Star of India and other gems from the Museum of Natural History. Two days later he was under arrest.

 


 

Allan Kuhn and Jack ā€œMurph the Surfā€ Murphy (center left and right), suspects in the Museum of Natural History jewel heist, on arrival at Kennedy Airport in 1964. They were flown in from Florida, accused of stealing the Star of India sapphire and the Star Ruby of Burma from the museum. Credit: Patrick Burns/The New York Times

By Robert D. McFadden NY Times

He called himself “Murph the Surf,” a tanned, roguish, party-loving beach boy from Miami, and he transfixed the nation in 1964 by pulling off the biggest jewel heist in New York City history — the celebrated snatching of the Star of India, a sapphire larger than a golf ball, and a haul of other gems from the American Museum of Natural History.

It was not that 27-year-old Jack Roland Murphy and his accomplices, Allan Kuhn and Roger Clark, were super-thieves, like the ones Maximilian Schell and Melina Mercouri portrayed in the then-current Jules Dassin film, “Topkapi,” about a plot to steal an emerald-encrusted dagger from the Topkapi Palace in Istanbul — a movie the museum thieves had recently seen.

Rather, though they had robbed waterfront mansions in Miami and escaped by speedboat, they left a trail of amateurish clues to the museum theft and were caught by New York detectives two days after the crime, although the loot — worth more than $3 million in today’s dollars — was still missing: stashed by then in waterproof pouches in Miami’s Biscayne Bay. 

And it was not that the job was so well planned. Rather, security for the fourth-floor Hall of Gems was just terrible. Burglar alarms had long ago stopped working, windows at night were left ajar for ventilation, and there were only eight guards for the museum’s dozens of interconnected buildings. One aging guard shined a flashlight into the hall on his occasional rounds. The gems were begging to be stolen.


James A. Oliver, director of the Museum of Natural History, inspecting the case that held jewels stolen from the museum. Credit: Arthur Brower/The New York Times

“Allan said he could hear the jewels talking,” Mr. Murphy told The New York Times in 2019 for a retrospective article 55 years after the infamous break-in. “He said, ‘The jewels are saying, ‘Take us to Miami.’ So I said, ‘Well, let’s take them to Miami.’”

Mr. Murphy died on Saturday at his home in Crystal River, Fla. He was 83. His wife, Kitten, said the cause was heart and organ failure.

Mr. Murphy was an enigma of fabled deeds and crimes. By his own account, he had been a concert violinist with the Pittsburgh Symphony at 18, a star athlete who won the University of Pittsburgh’s first tennis scholarship, and a two-time national surfing champion. A daring thief and self-promoter, an author, a prison missionary and television evangelist, he created his own myths and let the news media and Hollywood embellish them.

He published a short, self-serving memoir, “Jewels for the Journey” (1989), which neglected to mention that he was a convicted murderer; and he was portrayed in films, including Marvin Chomsky’s “Murph the Surf” (1975), a glamorized account of the museum caper with Don Stroud in the title role. Mr. Murphy spent nearly two decades in prison — a short stretch for the museum heist and a long one for a particularly brutal homicide in Miami.

Painted as a Folk Hero

For much of his adult life, Mr. Murphy was a caricature drawn from the publicity that engulfed him. The tabloids, which romanticized the museum theft as a crime of the century, idealized him as a handsome blond adventurer in dark sunglasses who charmed women, smoked dope and loved jazz. Even the mainstream press portrayed him as a kind of folk hero.


 

Detective Vincent Buccigrossi dusts a glass panel for fingerprints during the investigation of Mr. Murphy’s infamous jewelry theft at the Museum of Natural History. Credit: Arthur Brower/The New York Times

But the ordinary hallmarks of identity — a name, a date and place of birth, a history of schools and jobs — were missing or obscured by his misleading and contradictory statements, by his nomadic life of crime, and by dubious claims about his accomplishments, his innocence and the authenticity of his late-in-life religious conversion.

The California Index of Births listed his name as Jack Ronald Murphy. In an interview for this obituary in May, he said his birth name was Jack Rolland Murphy. At some point, he began using Roland for a middle name. For years he cited various birth dates and places, evidently to hide his identity from the law.

He told The Times that his father had been a telephone lineman, but told the East Coast Surf Legends Hall of Fame that he had been an electrical contractor, always on the move. He said he had attended 12 grade schools and three high schools. He claimed to have a photographic memory, but in the Times interview he could not identify any of the schools or the years he attended them.

Was he a genius? Perhaps. Florida correctional authorities listed his I.Q. as 143 — in the 99.8th percentile of scores. Did he play a violin with the Pittsburgh Symphony at 18? There is no record of it. Did he win a tennis scholarship to the University of Pittsburgh? Probably. Was he a national surfing champion? Perhaps. Was he the Miami cat burglar who in the 1960s rifled waterfront homes for jewels and escaped by speedboat over a maze of waterways? Almost certainly, investigators said.

Did he conspire with two secretaries to steal securities from a brokerage, and lure them aboard a speedboat, where he and another man bludgeoned and slashed them to death and dumped their bodies in a canal in 1967? Probably. A court convicted him of one of the homicides and sentenced him to life in prison. He served 17 years.

But in 1986 he was released, a born-again Christian with a new persona and vocation, preaching to prison inmates. Was his redemption real or faked? Either way, he gave it a huge effort.

 



 

Mr. Murphy became a born-again Christian, preaching to prison inmates. In 1978 at Miami’s Raiford prison, he admonished a group of teenage boys to turn away from a life of crime. Credit: Associated Press

Mr. Murphy ministered to thousands of inmates over decades, became a television evangelist and appeared with celebrities at prayer breakfasts, once with President Ronald Reagan. His friend the N.F.L. quarterback Roger Staubach of the Dallas Cowboys called his redemption real enough.

California Surfing

Jack Ronald Murphy was born in Los Angeles on May 26, 1937, to Jack Marshall Murphy and Sylvia Ruth Camp, who were married six weeks after his birth. An only child, Jack spent some of his formative years in a disciplinarian household in Carlsbad, Calif., an oceanfront city near San Diego. John Penrod, a childhood friend, told Sports Illustrated in 2020 that he once saw the father slap the boy’s face for washing dishes too slowly.

Jack became a rebel, but also an adept violinist, tennis player and surfer at local beaches. The Murphys moved to Los Alamos, N.M., and to Modesto, Calif., in the 1940s, and to the Pittsburgh suburb of McKeesport when he was a high school senior. He said he won a tennis championship there and a scholarship to the University of Pittsburgh.

But months after matriculating at Pitt in 1955, he dropped out and hitchhiked to Miami Beach. He found jobs teaching swimming, scuba diving, tennis and dancing at cabana clubs and resorts. He also became a high-board stunt diver with a traveling aquatic troupe at swank hotels.

In 1957, Mr. Murphy married Gloria Sostock. They had two sons, Shawn and Michael, and were divorced in 1962. In 1963, he married Linda Leach and was divorced. A relationship with Bonnie Lou Sutera ended with her apparent suicide in 1964. Another with Connie Hopen lasted from 1967 to 1969. In 1987, he married Mary Catherine Luppold Collins, called Kitten.

A complete list of survivors was not immediately available.

In Cocoa Beach, Mr. Murphy opened a surfboard shop and began surfing competitively. The Encyclopedia of Surfing (2005), by Matt Warshaw, says he won the 1962 Daytona Beach, Fla., Surfing Championship, was inducted into the East Coast Surf Legends Hall of Fame in 1996 and won the East Coast Surfing Championship in Virginia Beach, Va., in 1966.

In Miami Beach, he met Mr. Kuhn, a scuba diver with a speedboat, who introduced him to crime. After plundering art works from waterfront homes, instead of selling them to fences they called art insurers and traded their booty for cash, no questions asked.

In the fall of 1964, Mr. Murphy, Mr. Kuhn and Mr. Clark drove to Manhattan hoping for a big score. Renting a penthouse at the Cambridge House Hotel on West 86th Street, they threw all-night drug parties. In Midtown, they robbed bar patrons and burglarized hotel rooms.

 


Gems stolen from the Museum of Natural History, displayed in District Attorney Frank S. Hogan’s office in 1965. From right, the Star of India, larger than a golf ball, and Midnight Star, both sapphires; five emeralds; and two aquamarines. Credit: Ernie Sisto/The New York Times

At the J.P. Morgan Hall of Gems and Minerals at the American Museum of Natural History, on Central Park West, they noted lax security and gawked at what they found there: the Star of India, a 563-carat, oval-shaped blue sapphire, 2.5 inches long (a golf ball is 1.68 inches in diameter); the DeLong Star Ruby, at 100.32 carats; and the 116-carat Midnight Star, one of the world’s largest black sapphires.

On the night of Oct. 29, a Thursday, with Mr. Clark on the street as lookout, Mr. Murphy and Mr. Kuhn, carrying a coil of rope, scaled a tall iron fence behind the museum, climbed a fire escape to the fifth floor and inched along a narrow ledge. Tying the rope to a pillar above an open fourth-floor window, Mr. Murphy swung down and used his foot to move the sash.

They were in.

The glass protecting the important gems was a third of an inch thick, too strong to break with a rubber mallet. Instead of risking noise with heavy blows, they used cutters to score circles of glass; duct tape to cover the circles, to prevent shattering and muffle the sound; and a rubber suction cup to pull the pieces out.

They opened three cases and bagged 22 prizes: emeralds, diamonds, rubies, sapphires and gem-laden bracelets, brooches and rings. Finally, they went out the window, climbed down and walked away, encountering several police officers on their beat.

“Good evening, officers,” Mr. Murphy said. They gave him a nod and kept walking.

The next day, as headlines on the heist hit the streets, Mr. Murphy and Mr. Kuhn flew to Miami and stashed the loot in pouches under Mr. Kuhn’s boat in Biscayne Bay.

Their liberty was short.

A hotel clerk tipped off the police. In the penthouse, investigators found a museum floor plan, brochures on its gem collections and sneakers with glass shards on the soles. Their search was interrupted when Roger Clark walked in. He admitted to the theft and said Mr. Murphy and Mr. Kuhn had taken the gems to Miami. A day later, all three were in custody.

Half the missing gems, including the Star of India and the Midnight Star, were recovered by a New York prosecutor, Maurice Nadjari, who promised Mr. Kuhn leniency for revealing the hiding place. Mr. Nadjari found the gems in a Miami bus station locker, where they had been stashed by a Kuhn confederate after retrieving them from under the boat.

Mr. Murphy and his partners served about two years each at Rikers Island in New York. The Star of India and the Midnight Star eventually went back on display at the natural history museum, now more popular than they had ever been before. So did the DeLong Star Ruby, which was recovered in a Miami phone booth after $25,000 in ransom was paid. Another prize, the Eagle Diamond, was never recovered.

 


Mr. Murphy in Crystal River, Fla., in 2019. He transfixed the nation in 1964 by pulling off the biggest jewel heist in New York City history. Credit: Eve Edelheit for The New York Times

Killings at Whiskey Creek

While Mr. Kuhn and Mr. Clark resumed anonymous lives, Mr. Murphy’s crimes deepened. In 1967, he and a Miami thug, Jack Griffith, met Terry Rae Frank and Annelie Mohn, secretaries who had stolen $500,000 in securities from a California brokerage where they worked. Prosecutors later said Mr. Murphy had conspired with the women in the theft, and gave them a hide-out in Miami.

Mr. Murphy and Mr. Griffith took the women on their last ride: a midnight speedboat excursion to Hollywood, north of Miami, ostensibly to discuss disposing of the securities (worth $4 million in today’s dollars). But in a waterway called Whiskey Creek, the women were bludgeoned and hacked to death, and their bodies, anchored with concrete blocks, were dumped overboard.

Traced through the stolen securities, Mr. Murphy and Mr. Griffith were charged with the killings. In a 1969 trial in Fort Lauderdale, they blamed each other for the murders and were both convicted. Mr. Griffith was sentenced to 45 years and Mr. Murphy to life in prison.

After 17 years in Florida prisons, Mr. Murphy was released in 1986, vowing to spend his remaining years on “God’s business.” For three decades, supported by groups like the International Network of Prison Ministries, he traveled from his home in Crystal River to preach to inmates in a dozen countries.

He appeared on Christian broadcasts and at criminal rehabilitation conferences, sometimes with an entourage of major league athletes and popular singers. In 2000, the Florida Parole Board ended his lifetime parole.

In media accounts of Mr. Murphy’s later life, the murders of Ms. Frank and Ms. Mohn became footnotes to the supposedly more alluring tales of his prison ministry and the heist at the American Museum of Natural History.

“The streamlined legend of Murph the Surf has long overshadowed the nuanced conundrum of Jack Roland Murphy’s core,” as Sports Illustrated put it. “Are decades spent sacrificing for others enough to atone for a few moments of savagery on Whiskey Creek?”

Corey Kilgannon contributed reporting.

 

Sunday, September 13, 2020

Welcome to the Next Level of Bullshit

 


The language algorithm GPT-3 continues our descent into a post-truth world. 

BY RAPHAƋL MILLIƈRE Nautilus Magazine



ā€œOne of the most salient features of our culture is that there is so much bullshit.ā€ These are the opening words of the short book On Bullshit, written by the philosopher Harry Frankfurt. Fifteen years after the publication of this surprise bestseller, the rapid progress of research on artificial intelligence is forcing us to reconsider our conception of bullshit as a hallmark of human speech, with troubling implications. What do philosophical reflections on bullshit have to do with algorithms? As it turns out, quite a lot.

In May this year the company OpenAI, co-founded by Elon Musk in 2015, introduced a new language model called GPT-3 (for “Generative Pre-trained Transformer 3”). It took the tech world by storm. On the surface, GPT-3 is like a supercharged version of the autocomplete feature on your smartphone; it can generate coherent text based on an initial input. But GPT-3’s text-generating abilities go far beyond anything your phone is capable of. It can disambiguate pronouns, translate, infer, analogize, and even perform some forms of common-sense reasoning and arithmetic. It can generate fake news articles that humans can barely detect above chance. Given a definition, it can use a made-up word in a sentence. It can rewrite a paragraph in the style of a famous author. Yes, it can write creative fiction. Or generate code for a program based on a description of its function. It can even answer queries about general knowledge. The list goes on. 

Not long ago, hardly anyone suspected the hoax when this self-help blog post, written by GPT-3, reached the top of Hacker News, a popular news aggregation website. ā€œYou see, creative thinking is all about having fun,ā€ GPT-3 writes. ā€œIf you spend too much time on it, then it stops being fun and it just feels like work.ā€ Credit: Substack

GPT-3 is a marvel of engineering due to its breathtaking scale. It contains 175 billion parameters (the weights in the connections between the ā€œneuronsā€ or units of the network) distributed over 96 layers. It produces embeddings in a vector space with 12,288 dimensions. And it was trained on hundreds of billions of words representing a significant subset of the Internet—including the entirety of English Wikipedia, countless books, and a dizzying number of web pages. Training the final model alone is estimated to have cost around $5 million. By all accounts, GPT-3 is a behemoth. Scaling up the size of its network and training data, without fundamental improvements to the years-old architecture, was sufficient to bootstrap the model into unexpectedly remarkable performance on a range of complex tasks, out of the box. Indeed GPT-3 is capable of ā€œfew-shotā€ learning, picking up a new task from just a handful of examples supplied in its prompt, and even, in some cases, ā€œzero-shotā€ learning, performing a new task without being given any example of what success looks like.
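To make ā€œfew-shotā€ concrete, here is a minimal sketch, in Python, of the kind of prompt involved; the translation task and examples are illustrative only, and the call to GPT-3 itself (available through OpenAI’s private beta API) is not shown.

```python
# Illustrative only: the shape of a "few-shot" prompt as described above.
# The model is never retrained; the worked examples simply appear in its
# input, and it is asked to continue the pattern.

def build_few_shot_prompt(examples, query):
    prompt = "Translate English to French.\n\n"
    for english, french in examples:
        prompt += f"English: {english}\nFrench: {french}\n\n"
    prompt += f"English: {query}\nFrench:"
    return prompt

examples = [("cheese", "fromage"), ("good morning", "bonjour")]
print(build_few_shot_prompt(examples, "sea otter"))
# A "zero-shot" prompt would drop the worked examples and keep only the task
# description and the query.
```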

Interacting with GPT-3 is a surreal experience. It often feels like one is talking to a human with beliefs and desires. In the 2013 movie Her, the protagonist develops a romantic relationship with a virtual assistant, and is soon disillusioned when he realizes that he was projecting human feelings and motivations onto “her” alien mind. GPT-3 is nowhere near as intelligent as the film’s AI, but it could still find its way into our hearts. Some tech startups like Replika are already working on creating AI companions molded on one’s desired characteristics. There is no doubt that many people would be prone to anthropomorphize even a simple chatbot built with GPT-3. One wonders what consequences this trend might have in a world where social-media interactions with actual humans have already been found to increase social isolation. 


OpenAI is well aware of some of the risks this language model poses. Instead of releasing the model for everyone to use, it has only granted beta access to a select few—a mix of entrepreneurs, researchers, and public figures in the tech world. One might wonder whether this is the right strategy, especially given the company’s rather opaque criteria in granting access to the model. Perhaps letting everyone rigorously test it would better inform how to handle it. In any case, it is only a matter of time before similar language models are widely available; in fact, it is already possible to leverage open services based on GPT-3 (such as AI Dungeon) to get a sense of what it can do. The range of GPT-3’s capacities is genuinely impressive. It has led many commentators to debate whether it really “understands” natural language, reviving old philosophical questions.1 

Gone are the days of “good old-fashioned AI” like ELIZA, developed in the 1960s by Joseph Weizenbaum’s team at the Massachusetts Institute of Technology. ELIZA offered an early glimpse of the future. Using carefully crafted “scripts,” ELIZA could exploit superficial features of language, by latching onto keywords, to produce predetermined answers in written conversations with humans. Despite its rudimentary, programmer-created ruleset, ELIZA was surprisingly effective at fooling some people into thinking that it could actually understand what they were saying—so much so that Weizenbaum felt compelled to write a book cautioning people to not anthropomorphize computer programs. Yet talking with ELIZA long enough could reveal that it was merely parroting human prose. ELIZA couldn’t parse natural language, let alone understand it, beyond simple and repetitive keyword-based tricks. 

Computer science has made staggering progress since then, especially in recent years, and the subfield of natural language processing has been at the forefront. Rather than relying on a set of explicit hand-crafted instructions, modern algorithms use artificial networks loosely inspired by the mammalian brain. These learn how to perform tasks by training themselves on a large amount of data. The sole purpose of this process, known as machine learning, is to find the optimal value of a mathematical function roughly representing how good or bad each output of the model—each attempt to complete the task over some part of the data—is. While artificial neural networks performed poorly when they first came onto the stage in the 1950s, the availability of increasing amounts of computational power and training data eventually vindicated their superiority over traditional algorithms. 
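As a toy illustration of that process (the data, model, and learning rate below are invented), here is gradient descent nudging a single parameter toward the value that minimizes a mean-squared-error loss, the ā€œhow good or bad is each outputā€ function in miniature:

```python
# A one-parameter "network": predict y = w * x, and adjust w to shrink the loss.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, target) pairs, roughly y = 2x
w = 0.0
learning_rate = 0.05

for step in range(200):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad  # step downhill

print(round(w, 2))  # settles near 2.0, the value that fits the data best
```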


Giving machines speech has, of course, long been considered a significant landmark on the winding path to developing human-level artificial intelligence. Many of the intelligent-seeming things we do, like engaging in complex reasoning and abstract problem-solving, we do using natural language, such as English.

An old idea, the distributional hypothesis, guided the machine-learning revolution in the realm of natural language processing. Words that occur in a similar context, according to this idea, have a similar meaning. This means that, in principle, an algorithm might learn to represent the meaning of words simply from their distributions in a large amount of text. Researchers applied this insight to machine-learning algorithms designed to learn the meaning of words by predicting the probability of a missing word, given its context (the sentence or group of words in which it appears). 
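A rough sketch of what that training signal looks like: from raw text alone (the sentence and window size below are invented for illustration), one can derive (context, missing-word) pairs of the kind such an algorithm learns to predict.

```python
# Turn a sentence into (context, target) training pairs, word2vec-style.
sentence = "the quick brown fox jumps over the lazy dog".split()
window = 2  # how many words on each side count as context

pairs = []
for i, target in enumerate(sentence):
    context = sentence[max(0, i - window):i] + sentence[i + 1:i + 1 + window]
    pairs.append((context, target))

print(pairs[3])  # (['quick', 'brown', 'jumps', 'over'], 'fox')
# Trained on billions of such pairs, words that share contexts end up with
# similar vector representations.
```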

In 2013, one such algorithm called “word2vec” was trained on a large corpus of news articles. During training, each word from the corpus was turned into a vector (also called an embedding) in a high-dimensional vector space. Words that occurred in similar contexts ended up having neighboring embeddings in that space. As a result, the distance between two word embeddings (measured by the cosine of the angle between them) intuitively reflected the semantic similarity between the corresponding words. The more related the meanings of two words were, the closer their embeddings should be in the space. 
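A minimal sketch of that similarity measure, using made-up three-dimensional vectors rather than real word2vec embeddings (which typically have a few hundred dimensions):

```python
import math

def cosine(u, v):
    # Cosine of the angle between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norms

dog = [0.9, 0.1, 0.3]  # toy embeddings, invented for illustration
cat = [0.8, 0.2, 0.4]
car = [0.1, 0.9, 0.2]

print(round(cosine(dog, cat), 2))  # high: the words occur in similar contexts
print(round(cosine(dog, car), 2))  # much lower: their contexts overlap less
```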

After training, word2vec’s embeddings appeared to capture interesting semantic relationships between words that could be revealed through simple arithmetic operations on the vectors. For example, the embedding for “king” minus the embedding for “man” plus the embedding for “woman” was closest to the embedding for … “queen.” (Intuitively, “king” is to “man” as “queen” is to “woman.”) 
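The same arithmetic can be sketched with invented two-dimensional embeddings, chosen by hand so that the offsets line up; a real model learns this geometry from data rather than having it built in.

```python
import math

emb = {
    "king":  [0.9, 0.8],  # toy coordinates: roughly (royalty, maleness)
    "queen": [0.9, 0.1],
    "man":   [0.1, 0.8],
    "woman": [0.1, 0.1],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# king - man + woman, then find the nearest remaining word.
target = [k - m + w for k, m, w in zip(emb["king"], emb["man"], emb["woman"])]
best = max((word for word in emb if word != "king"),
           key=lambda word: cosine(emb[word], target))
print(best)  # queen
```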

GPT-3 is significantly more complex than word2vec. It is based on an artificial neural network architecture called “Transformer,” introduced in 2017. Neural networks based on this architecture can be “pre-trained” on an enormous amount of text to learn general properties of natural language. Then they can simply be “fine-tuned” on a smaller corpus to improve performance on a specific task—for example, classifying news articles by topic, summarizing paragraphs, or predicting the sentences that follow a given input. While GPT-3 does not revolutionize the Transformer architecture, it is so large, and was trained on so much data, that it can achieve performance near or above previous fine-tuned models, without any fine-tuning. 
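The pre-train-then-prompt workflow can be sketched in a few lines. GPT-3 itself sits behind OpenAI’s private beta, so the example below uses the much smaller, openly released GPT-2 through the Hugging Face transformers library as a stand-in (assuming the library is installed); the prompt and settings are illustrative only.

```python
# pip install transformers -- uses GPT-2 as a freely available stand-in for GPT-3.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "One of the most salient features of our culture is that"
outputs = generator(prompt, max_length=40, num_return_sequences=1)
print(outputs[0]["generated_text"])
# The continuation is fluent but unreliable as to facts: the "artificial
# bullshit" the article goes on to describe.
```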

Weizenbaum’s old worries about people anthropomorphizing ELIZA are all the more pressing when it comes to GPT-3’s vastly superior abilities. But does GPT-3 understand what it says? The answer largely depends on how much we build into the notion of understanding. 

GPT-3 seems to capture an impressive amount of latent knowledge about the world, knowledge that is implicitly encoded in statistical patterns in the distribution of words across its gargantuan training corpus. Nonetheless, there are good reasons to doubt that GPT-3 represents the meaning of the words it uses in a way that is functionally similar to humans’ word representations. At the very least, children learn language through a rather different process, mapping words to concepts that embed knowledge acquired not only through reading text, but also crucially through perceiving and exploring the world. 

Consider how you learned what the word “dog” means. You presumably did not learn it merely by reading or hearing about dogs, let alone remembering the statistical distribution of the word “dog” in sentences you read or heard, but by seeing a real dog or a picture of one, and being told what it is. Your lexical concept dog does not merely encode the similarity between the meaning of the word “dog” and that of other words like “cat.” It embeds structured knowledge about dogs partially grounded in perceptual experience, including the knowledge that dogs have four legs, eat meat, and bark—all things you probably observed. 


GPT-3’s word embeddings are not perceptually grounded in the world, which explains why it often struggles to consistently answer common-sense questions about visual and physical features of familiar objects. It also lacks the kind of intentions, goals, beliefs, and desires that drive language use in humans. Its utterances have no “purpose.” It does not “think” before speaking, insofar as this involves entertaining an idea and matching words to the components of a proposition that expresses it. Yet its intricate and hierarchically-structured internal representations allow it to compose sentences in a way that often feels natural, and display sophisticated modeling of the relationships between words over whole paragraphs. 

If the family of GPT language models had a motto, it could be “Fake it till you make it.” GPT-3 is certainly good at faking the semantic competence of humans, and it might not be an exaggeration to say that it has acquired its own form of semantic competence in the process. 

In the first season of the TV show Westworld, the human protagonist visits a dystopian amusement park populated by hyper-realistic androids. Greeted by a human-like android host, he asks her, incredulous, whether she is real. She replies in a mysterious voice: “If you can’t tell, does it matter?” Whether or not GPT-3 understands and uses language like we do, the mere fact that it is often good enough to fool us has fascinating—and potentially troubling—implications. 

This is where Frankfurt’s notion of bullshit is helpful. According to Frankfurt, bullshit is speech intended to persuade without regard for truth. In that sense, there is an important difference between a liar and a bullshitter: The liar does care about the truth insofar as they want to hide it, whereas the bullshitter only cares about persuading their listener. Importantly, this does not entail that bullshitters never tell the truth; in fact, good bullshitters seamlessly weave accurate and inaccurate information together. For this very reason, as Frankfurt puts it, “Bullshit is a greater enemy of truth than lies are.” 


At its core, GPT-3 is an artificial bullshit engine—and a surprisingly good one at that. Of course, the model has no intention to deceive or convince. But like a human bullshitter, it also has no intrinsic concern for truth or falsity. While part of GPT-3’s training data (Wikipedia in particular) contains mostly accurate information, and while it is possible to nudge the model toward factual accuracy with the right prompts, it is definitely no oracle. Without independent fact-checking, there is no guarantee that what GPT-3 says, even if it “sounds right,” is actually true. This is why GPT-3 shines when writing creative fiction, where factual accuracy is less of a concern. But GPT-3’s outputs are distinct enough from human concerns and motivations in language production, while being superficially close enough to human speech, that they can have potentially detrimental effects on a large scale. 

First, the mass deployment of language models like GPT-3 has the potential to flood the Internet, including online interactions on social media, with noise. This goes beyond obvious worries about the malicious use of such models for propaganda. Imagine a world in which any comment on Twitter or Reddit, or any news article shared on Facebook, has a non-trivial probability of being entirely written by an algorithm that has no intrinsic concern for human values. 

That scenario is no longer science fiction. Just a few weeks ago, a self-help blog post written by GPT-3 reached the top of Hacker News, a popular news aggregation website.2 Hardly anyone suspected the hoax. We have to come to terms with the fact that recognizing sentences written by humans is no longer a trivial task. As a pernicious side-effect, online interactions between real humans might be degraded by the lingering threat of artificial bullshit. Instead of actually acknowledging other people’s intentions, goals, sensibilities, and arguments in conversation, one might simply resort to a reductio ad machinam, accusing one’s interlocutor of being a computer. As such, artificial bullshit has the potential to undermine free human speech online. 

GPT-3 also raises concerns about the future of essay writing in the education system. For example, I was able to use an online service based on GPT-3 to produce an impressive philosophical essay about GPT-3 itself with minimal effort (involving some cherry-picking over several trials). As several of my colleagues commented, the result is good enough that it could pass for an essay written by a first-year undergraduate, and even get a pretty decent grade. The Guardian recently published an op-ed on artificial intelligence produced by stitching together paragraphs from several outputs generated by GPT-3. As they note, “Editing GPT-3’s op-ed was no different to editing a human op-ed”—and overall, the result is coherent, relevant and well-written. Soon enough, language models might be to essays as calculators are to arithmetic: They could be used to cheat on homework assignments, unless those are designed in such a way that artificial bullshit is unhelpful. But it is not immediately obvious how one could guarantee that. 

To conclude this article, I prompted GPT-3 to complete the first sentence of Frankfurt’s essay. Here is one of the several outputs it came up with: ā€œBullshitting is not always wrong, though sometimes it can be harmful. But even when it is harmless, it still has some serious consequences. One of those consequences is that it prevents people from being able to distinguish between what’s real and what isn’t.ā€ That’s more bullshit, of course; but it fittingly rings true.

RaphaĆ«l MilliĆØre is a Presidential Scholar in Society and Neuroscience in the Center for Science and Society at Columbia University, where he conducts research on the philosophy of cognitive science. Follow him on Twitter @raphamilliere

Footnotes 

1. According to philosopher John Searle’s “Chinese room argument,” no computer could ever understand a language by running a program. This is because such a computer would be analogous to a human operator in a room following a set of English instructions for manipulating Chinese symbols on the basis of their syntax alone, taking Chinese characters as input and producing other Chinese characters as output, without understanding Chinese in the process. Searle’s argument was originally aimed at old-fashioned symbolic algorithms like ELIZA. It could be adapted to modern language models (but the thought experiment would be all the more difficult to conceive). 

In any case, many philosophers rejected Searle’s conclusion for various reasons, including the suggestion that the human operator in the room is merely analogous to a specific component of the computer (the central processing unit, or CPU), and that a complete natural-language processing system—including not only the CPU but also the instructions it follows, and the memory containing intermediate states of its computations—could genuinely understand Chinese. Nonetheless, those who reject the conclusion of Searle’s argument still have room to disagree on which system would qualify as understanding natural language, and whether a computer specifically running GPT-3 would make the grade. 

2. Porr, L. Feeling unproductive? Maybe you should stop overthinking. https://adolos.substack.com/ (2020).