Wednesday, April 26, 2017

How Wrestling Explains Alex Jones and Donald Trump

By NICK ROGERS NY Times

Alex Jones, the conspiracist at the helm of the alt-news outlet InfoWars, used an unusual defense in a custody hearing in Texas last week. His ex-wife had accused him of being unstable and dangerous, citing Mr. Jones’s rants on his daily call-in show. (Among his many unconventional stances are that the government staged the Sandy Hook massacre and orchestrated the 9/11 attacks.) Through his attorneys, Mr. Jones countered that his antics are irrelevant to his fitness as a parent, because he is a performance artist whose public behavior is part of his fictional character. In other words, when he tells his audience that Hillary Clinton is running a sex-trafficking operation out of a Washington pizza parlor (an accusation for which he has offered a rare retraction), he is doing so merely for entertainment value.

Many of his liberal critics have since asked whether Mr. Jones’s devoted fans will abandon him now that he has essentially admitted to being a fraud.
They will not.

Alex Jones’s audience adores him because of his artifice, not in spite of it. They admire a man who can identify their most primal feelings, validate them, and choreograph their release. To understand this, and to understand the political success of other figures like Donald Trump, it is helpful to know a term from the world of professional wrestling: “kayfabe.”

Although the etymology of the word is a matter of debate, for at least 50 years “kayfabe” has referred to the unspoken contract between wrestlers and spectators: We’ll present you something clearly fake under the insistence that it’s real, and you will experience genuine emotion. Neither party acknowledges the bargain, or else the magic is ruined.

To a wrestling audience, the fake and the real coexist peacefully. If you ask a fan whether a match or backstage brawl was scripted, the question will seem irrelevant. You may as well ask a roller-coaster enthusiast whether he knows he’s not really on a runaway mine car. The artifice is not only understood but appreciated: The performer cares enough about the viewer’s emotions to want to influence them. Kayfabe isn’t about factual verifiability; it’s about emotional fidelity.

Although their athleticism is impressive, skilled wrestlers captivate because they do what sociologists call “emotional labor” — the professional management of other people’s feelings. Diners expect emotional labor from their servers, Hulkamaniacs demand it from their favorite performer, and a whole lot of voters desire it from their leaders.

The aesthetic of World Wrestling Entertainment seems to be spreading from the ring to the world stage. Ask an average Trump supporter whether he or she thinks the president actually plans to build a giant wall and have Mexico pay for it, and you might get an answer that boils down to, “I don’t think so, but I believe so.” That’s kayfabe. Chants of “Build the Wall” aren’t about erecting a structure; they’re about how cathartic it feels, in the moment, to yell with venom against a common enemy.

Voting to repeal Obamacare again and again only to face President Obama’s veto was kayfabe. So is shouting “You lie!” during a health care speech. It is President Bush in a flight suit, it is Vladimir Putin shirtless on a horse, it is virtually everything Kim Jong-un does. Does the intended audience know that what they’re watching is literally made for TV? Sure, in the same way they know that the wrestler Kane isn’t literally a demon. The factual fabrication is necessary to elicit an emotional clarity.

Despite superficial similarities, it is useful to distinguish kayfabe from the concept of satire. Satire depends on the constant awareness that what’s being presented is false. It requires frequent acknowledgment of that: winks to the camera, giggling breaks of character. The meaning comes directly from the disbelief. It depends on two conflicting mental processes happening at once, rather than the suspension of one in service of the other. It employs cognitive dissonance, rather than bypassing it. In that way, satire and kayfabe are actually opposites. Kayfabe isn’t merely a suspension of disbelief; it is a philosophy about truth itself. It rests on the assumption that feelings are inherently more trustworthy than facts.

Donald Trump rode kayfabe from Queens to Trump Tower to “The Apprentice” to the White House. Alex Jones may find it is as effective in the courtroom as it is on AM radio. Cultural elites can fact-check these men and point out glaring rhetorical contradictions until they are blue in the face; kayfabe renders it all beside the point. If you’re among the three million people who have chuckled at the viral video of a crying man addressing his wrestling heroes at a Q. and A. session, you know how succinctly he summarizes the mind-set: “It’s still real to me, dammit.”

Are truth and kayfabe, then, irreconcilable? In some contexts, probably. But devotees of the former might be well served to think of the latter as complementary rather than competing. Rationalists can and should make the case that empirical data is more reliable than intuition. But if they continue to ignore the human need for things to feel true, they will do so at their political peril.

Nick Rogers is a sociologist and lawyer on Long Island.

Monday, April 24, 2017

Ayn Rand

Ayn Rand’s Counter-Revolution
Jennifer Burns NY Times

STANFORD, Calif. — The crowds jostling below, the soldiers marching down icy boulevards, the roar of a people possessed: All this a young Ayn Rand witnessed from her family’s apartment, perched high above the madness near Nevsky Prospekt, a central thoroughfare of Petrograd, the Russian city formerly known as St. Petersburg.

These February days were the first turn of a revolutionary cycle that would end in November and split world history into before and after, pitting soldier against citizen, republican against Bolshevik, Russian against Russian. But it wasn’t until Rand became a New Yorker, some 17 years later, that she realized the revolution had cleaved not only Russian society, but also intellectual life in her adopted homeland of the United States.

We usually think of the 1950s as the decade of anti-Communism, defined by Senator Joseph McCarthy, the Hollywood blacklist and the purging of suspected Communists from unions, schools and universities. The prelude to all of that was the 1930s, when the nation’s intellectuals first grappled with the meaning and significance of Russia’s revolution. And it was in this decade that Ayn Rand came to political consciousness, reworking her opposition to Soviet Communism into a powerful defense of the individual that would inspire generations of American conservatives.

Rand is best known as the author of “The Fountainhead” and “Atlas Shrugged,” but before these came “We the Living,” the novel perhaps closest to her heart. It was certainly the novel closest to her life: The protagonist, a gifted engineering student named Kira Argounova, lived the life that might well have been Rand’s, had she stayed in the U.S.S.R. But whereas Kira died a dramatic death trying to escape over the snowy border into Latvia, Rand succeeded in emigrating in 1926 and soon made it to Hollywood, the “American movie city” she had written about as a Russian film student, which became her first home in the United States.

By the mid-1930s, after establishing a successful writing career and becoming an American citizen, Rand was ready to explain the country she left behind. “We the Living” depicted the quotidian gray of life after the drama of the October Revolution had faded. What was left was the cynical machinations of party insiders and the struggle to maintain a facade of gentility — one hostess served potato-skin cookies to guests, who “kept their arms pressed to their sides to hide the holes in their armpits; elbows motionless on their knees — to hide rubbed patches; feet deep under chairs — to hide worn felt boots.”
At the novel’s heart was the quiet despair of hopes crushed by new lines of class and caste, as students like Kira, punished for her family’s former prosperity, had their futures stripped away. For Rand, “We the Living” was more than a novel, it was a mission.

“No one has ever come out of Soviet Russia to tell it to the world,” she told her literary agent. “That was my job.”

Only, in 1930s America, few wanted to hear what she had to say. When the novel was published in 1936, capitalism itself was in crisis. The Great Depression had cast its dark shadow over the American dream. Bread lines snaked through the cities; Midwestern farms blew away in clouds of dust. Desperate men drifted across the country and filled up squatters’ camps of the homeless and workless on the outskirts of small towns, terrifying those who still had something to lose.

In this moment, Soviet Russia stood out to the nation’s thinking class as a sign of hope. Communism, it was believed, had helped Russia avoid the worst ravages of the crash. Tides of educated opinion began running strong to the left.

“These were the first quotas of the great drift from Columbia, Harvard and elsewhere,” the American writer — and former Soviet spy — Whittaker Chambers wrote in his 1952 book “Witness.” “A small intellectual army passed over to the Communist Party with scarcely any effort on its part.”

This intellectual army had little interest in a melodramatic novel about the sufferings of the bourgeoisie. Worse, reviews of the book reflected an ideological divide that Rand had not known existed. Rand had taken for granted there would be “pinks” in America, but she hadn’t known they would matter, certainly not in New York City, one of the literary capitals of the world.

But the champions she found were outsiders of that milieu, like the newspaper columnist H. L. Mencken. Even reviewers who enjoyed her writing, though, generally assumed Rand’s rendition of Soviet Russia in “We the Living” was exaggerated or no longer true, now that Communism had matured.

Rand had thus stumbled, unwittingly, into a drama that would shape American thought and politics for the rest of the century: a bitter love triangle between Communists, ex-Communists and anti-Communists.

First came the Communists, often literary men like Chambers, John Reed (of “Ten Days That Shook the World” fame) or Will Herberg. A handful of the most prominent Bolshevik enthusiasts were women, including the dancer Isadora Duncan and Gerda Lerner, a later pioneer of women’s history.
Next were the ex-Communists. For many, 1939 was the fateful year, when Soviet Russia signed a nonaggression pact with Nazi Germany, previously its mortal enemy. The reversal was too much for all but the most hardened American leftists — after all, it was the fight against fascism that had drawn many to the cause in the first place. (In an interesting twist, Italian filmmakers produced a pirated film adaptation of “We the Living” as an anti-fascist statement, which was later banned by Mussolini’s government.) The great drift into the Communist Party U.S.A. became the great drift out of it.

Still, to be an ex-Communist was not necessarily to be an anti-Communist, at least not immediately. Rand was one of the first, and not because she had lost her faith, but because she was an émigré who had witnessed the Russian Revolution from the inside.

Finally, in the 1950s, anti-Communism became a full-fledged intellectual and political movement. Chambers made the most spectacular move from Communist to ex-Communist, to anti-Communist, revealing his participation in an espionage ring and implicating several high-ranking government employees, including Alger Hiss, the former State Department official who was accused of being a Soviet agent.

Chambers’s revelations helped touch off McCarthy’s crusade against suspected Communists in government. Rand herself got in on the action, testifying before the House Un-American Activities Committee about Communist infiltration of Hollywood.

And here unfolded the last act of the drama: the eventual emergence of anti-anti-Communism. It was one thing to reject a political movement gone horribly wrong. It was something different to turn on one’s former friends and associates, in the process giving “aid and comfort to cold warriors,” as the writer and historian Tony Judt wrote. And so even as Communism fell out of favor, among intellectuals anti-Communism became as unfashionable as it had been in the 1930s.

Once again, Rand was a talismanic presence. By the 1950s, her anti-Communism had evolved into a full-throated celebration of capitalism, buttressed by her original credibility as a survivor of Soviet collectivism. She had traded in the elegiac historical fiction of “We the Living” for another Soviet inheritance: agitprop novels, dedicated to showcasing heroic individualists and entrepreneurs. By 1957, she had fully realized the form in “Atlas Shrugged,” an epic that weighed in at Tolstoyan proportions.

Rand had found her voice — and her audience. “Atlas Shrugged” became a best seller, despite poor reviews — Rand would never get the critical respect she craved. The gap between Rand and her fellow novelists and writers, first evident in the 1930s, would never close.

Thursday, April 20, 2017


There's nothing inherently attractive about working down a coal mine. I've never done it myself, but it seems like hard physical labour, often in cramped conditions, with a view from the office that leaves a lot to be desired. In the short run, there's the danger of explosions and collapses to contend with, while black lung disease is the long-run killer that ensures there's always room at the bar in a miners' welfare club.

Yet when President Trump flourished his executive pen a few weeks ago to roll back Obama-era environmental protection regulations, he was surrounded by West Virginian miners excited at the prospect of getting back down the pit. For Trump, the politics were obvious. These were the people he had diligently courted for nearly two years and whom he had brandished as icons of the besieged American worker. In the Republican primaries, Trump won over 90% of the votes in McDowell County, the traditional heart of West Virginia coal country, on the back of his promise to put them back to work and end the 'war on coal'. 

In the Trump coalition, it would be hard to find a more archetypal white working-class community than the miners. In 2016, the Bureau of Labor Statistics reported that minority employment (African American, Asian, and Latino) in the coal mining industry was a mere 5% versus 35% in the overall US workforce. Rolling back clean energy regulations is also a chance for Trump to burnish his credentials with traditional Republicans for whom even the rhetorical resurrection of the coal industry is a middle finger to the lefty pinko climate change panic merchants ruining the American economy. Nothing says we deny global warming like a smoke stack belching sulphurous coal fumes into the atmosphere.

Yet the miners flanking Trump were just props in a piece of pure political theatre, with the further irony that Trump didn't even get to the part in the script where he takes away their medical coverage and defunds the Appalachian Economic Development Agency tasked with creating alternative jobs. Anyone with even a cursory understanding of energy economics knows that – regardless of what Trump says or does – the US coal industry is not about to rise Lazarus-like from decades of decline, and the villain isn't increased regulations, but rather the market economics beloved of traditional Republicans. 

The reality is that the combination of abundant natural gas, plus increasingly cheap renewable energy, has made coal uneconomic. 2016 was the first year in which natural gas generated more electricity than coal in the US, and as power generation accounts for the overwhelming majority of coal consumption, the result is production at multi-decade lows, and a string of major coal producers filing for bankruptcy over the last few years. 

The impact on the miners of declining production is amplified by increased automation. When US mining companies stopped digging under mountains and instead started blowing the tops off them to shovel up the debris, far fewer workers were required. By far the most cost effective coal producing region in the US is the Powder River Basin in Montana and Wyoming, where vast strip mines scar the landscape. But in the traditional mining heartlands like Eastern Kentucky, the world has moved on. By the end of 2016, coal mining employment in Kentucky was at its lowest level in over a century. 

To put the industry into perspective, the American coal sector now employs only 53,000 people. It still dwarfs the UK coal industry which is down to a few thousand workers in a handful of open cast mines, but in US economic terms it's now a rounding error. Last year the industry employed 2,000 fewer people than the 'retail sewing' sector, yet you don't see the quilt makers of America jostling for photo ops in the Oval Office.

So why has Trump latched onto the idea that coal mining is an industry that must be saved at all costs? It's for the same reason that he peddles the fantasy of bringing steel mills back to Ohio – a nostalgia for an industrial America that shaped and supported working-class communities and which technological progress is inexorably erasing. The truth, whether it's miners in West Virginia or shipyard workers on the Clyde, is that hard physical work in homogenous communities created social bonds and social capital that aren't easily replaced. Whether those communities are in the remote valleys of Appalachia or the somewhat less remote valleys of South Wales, the closing of a mine wreaks social havoc.

Thirty years on from Arthur Scargill's failed strike, the parallels are clear. It wasn't just about jobs and economics, it was about trying to protect a way of life that, while dirty and dangerous, was also unique. The jobs may eventually be replaced, but loose networks of Uber drivers are unlikely to spawn many male voice choirs, world-class brass bands, or the social and cultural corona that surrounded the pits. A key insight from J D Vance's recent award-winning Appalachian family history, 'Hillbilly Elegy', is that when the social capital anchored in a concentration of traditional jobs gets eroded, whole communities can descend into a purposeless existence in which drug abuse, alcoholism and domestic violence find fertile ground.

The political current that Trump has expertly tapped into is the need for 'meaningful work' rather than just a job. When you're powering the country, launching ocean liners, or building cars to export to the world, there's a sense of purpose that a zero-hours contract stacking the shelves in a Walmart struggles to replicate. Sherrod Brown, a Democratic senator from Ohio, wrote in a recent op-ed for the New York Times, 'People take pride in the things they make, in serving their communities in hospitals or schools, and in making their contribution to society. When we devalue work, we threaten the pride and dignity that come from it.'

In the long run of economic history, the decline of mining and the erosion of the social fabric around it is nothing special. Neither is the search for 'meaningful work'. In 1853, a full century before what many Americans would consider the pinnacle of their industrial power, Henry David Thoreau, the American poet and essayist, lamented that 'most men would feel insulted if it were proposed to employ them in throwing stones over a wall, and then in throwing them back, merely that they might earn their wages. But many are no more worthily employed now.' 

The nature of economic progress has always been that we fail to imagine what might come next to fill the void caused by the decline of what came before. In Dundee's heyday of jute, jam and journalism, the creation of Grand Theft Auto and the emergence of a thriving video games cluster would have been literally unimaginable. That is why luddite tendencies like Trump's have been with us for centuries, sarcastically illustrated by the French 19th-century economist Frederic Bastiat, whose 'candlemakers' petition' demanded that the French parliament blot out the sun to protect lighting jobs.

The economic cycle of creative destruction is personally very disruptive if you're a miner in Eastern Kentucky, or a steel worker in Motherwell, but at the national level it's been a consistent source of new employment. In the 15 years after containers were introduced, 90% of dock workers in the US lost their jobs, and the social ecosystem of pubs, crime, and prostitution that used to characterise the docks of the world gave way to vast automated ports in which people are few and far between. Yet the Brooklyn waterfront warehouses that once stored dry goods now hum with internet start-ups (and the occasional failed presidential campaign), and Finnieston in Glasgow is now becoming as famous for its trendy restaurants as it was for its crane. 

As dock jobs evaporated, many workers found themselves behind a wheel, as containerisation fuelled the growth of the long-distance truck industry that now employs nearly 2 million people in the US, but which itself is soon likely to be disrupted by driverless vehicles. For the remaining miners, the long wave of demand for coal kicked off by Trevithick, Watt, and Stephenson in the 19th century has crested and receded, and their jobs will inevitably go the way of the longshoreman, hopefully to be replaced by something unimagined but meaningful. 

Yet the economic pessimists share Thoreau's concern, worrying that this time it might be different: that Trump's electoral appeal as an economic King Canute standing athwart the tide of progress reflects a fear that we've now reached a point where automation will prevent the economy from creating new 'meaningful jobs'. That fear may seem strange in an America where unemployment is now below 5%, but as economist Thomas Piketty has pointed out, the long-run growth in income inequality can be directly traced to a decline in the return to labour in favour of the return to capital – a trend that has seen the rich get richer and the working poor scrambling to make ends meet.

If the supply of 'meaningful jobs' does indeed start drying up because of increased automation, it will raise broader societal questions. In India and Scandinavia there's already active discussion of moving to a guaranteed minimum income as a redistributive mechanism, but that alone won't address the issue of the dignity of work, hence the attraction of a demagogue who promises to bring back 'real jobs'. If those jobs don't appear organically, we could find ourselves back to something that looks like Franklin Roosevelt's Works Progress Administration of the 1930s. Not just a guaranteed income, but a guaranteed job to go with it, and the promise of doing something worthwhile, rather than just being a benefits scrounger.

As the need for labour declines, we'll also need to find ways to create social capital, as rotary clubs and bowling leagues don't spring spontaneously from lines at the benefits office. The optimists see a future where, like retired people searching for continued meaning in their life, we become a society of woodworkers, local history researchers and artists. But that transition will require a level of political acumen that seems to be beyond the current cohort of politicians in the US who are content to try and turn back the clock and collectively send us straight down the mines again.

Tuesday, April 04, 2017

Russia Moves to Ban Jehovah’s Witnesses as ‘Extremist’

By ANDREW HIGGINS NY TIMES

VOROKHOBINO, Russia — A dedicated pacifist who has never even held a gun, Andrei Sivak discovered that his government considered him a dangerous extremist when he tried to change some money and the teller “suddenly looked up at me with a face full of fear.”

His name had popped up on the exchange bureau’s computer system, along with those of members of Al Qaeda, the Islamic State and other militant groups responsible for shocking acts of violence.

The only group the 43-year-old father of three has ever belonged to, however, is Jehovah’s Witnesses, a Christian denomination committed to the belief that the Bible must be taken literally, particularly its injunction “Thou shalt not kill.”

Yet, in a throwback to the days of the Soviet Union, when Jehovah’s Witnesses were hounded as spies and malcontents by the K.G.B., the denomination is at the center of an escalating campaign by the authorities to curtail religious groups that compete with the Russian Orthodox Church and that challenge President Vladimir V. Putin’s efforts to rally the country behind traditional and often militaristic patriotic values.

The Justice Ministry on Thursday put the headquarters of Jehovah’s Witnesses in Russia, an office complex near St. Petersburg, on a list of the bodies banned “in connection with the carrying out of extremist activities.”

Last month, the ministry asked the Supreme Court to outlaw the religious organization and stop its more than 170,000 Russian members from spreading “extremist” texts. The court is scheduled to hear — and is likely to rule on — the case on Wednesday.

Extremism, as defined by a law passed in 2002 but amended and expanded several times since, has become a catchall charge that can be deployed against just about anybody, as it has been against some of those involved in recent anti-corruption protests in Moscow and scores of other cities.
Several students who took part in demonstrations in the Siberian city of Tomsk are now being investigated by a special anti-extremism unit while Leonid Volkov, the senior aide to the jailed protest leader Aleksei A. Navalny, said he had himself been detained last week under the extremism law.

In the case of Jehovah’s Witnesses, the putative extremism seems to derive mostly from the group’s absolute opposition to violence, a stand that infuriated Soviet and now Russian authorities whose legitimacy rests in large part on the celebration of martial triumphs, most notably over Nazi Germany in World War II but also over rebels in Syria.

Jehovah’s Witnesses, members of a denomination founded in the United States in the 19th century and active in Russia for more than 100 years, refuse military service, do not vote and view God as the only true leader. They shun the patriotic festivals promoted with gusto by the Kremlin, like the annual celebration of victory in 1945 and recent events to celebrate the annexation of Crimea in March 2014.

Mr. Sivak, who says he lost his job as a physical education teacher because of his role as a Jehovah’s Witnesses elder, said he had voted for Mr. Putin in 2000, three years before joining the denomination. He has not voted since, he added, nor has he supported anti-Kremlin activities of the sort that usually attract the attention of Russia’s post-Soviet version of the K.G.B., the Federal Security Service, or F.S.B.

“I have absolutely no interest in politics,” he said during a recent Jehovah’s Witnesses Friday service in a wooden country house in Vorokhobino, a snow-covered village north of Moscow. Around 100 worshipers crammed into a long, chilly room under fluorescent lights to listen to readings from the Bible, sing and watch a video advising them to dress for worship as they would for a meeting with the president.

“From the Russian state’s perspective, Jehovah’s Witnesses are completely separate,” said Geraldine Fagan, the author of “Believing in Russia — Religious Policy After Communism.” She added, “They don’t get involved in politics, but this is itself seen as a suspicious political deviation.”

“The idea of independent and public religious activity that is completely outside the control of — and also indifferent to — the state sets all sorts of alarm bells ringing in the Orthodox Church and the security services,” she said.

That the worldwide headquarters of Jehovah’s Witnesses is in the United States and that its publications are mostly prepared there, Ms. Fagan added, “all adds up to a big conspiracy theory” for the increasingly assertive F.S.B.
For Mr. Sivak, it has added up to a long legal nightmare. His troubles began, he said, when undercover security officers posed as worshipers and secretly filmed a service where he was helping to officiate in 2010.

Accused of “inciting hatred and disparaging the human dignity of citizens,” he was put on trial for extremism along with a second elder, Vyacheslav Stepanov, 40. The prosecutor’s case, heard by a municipal court in Sergiyev Posad, a center of the Russian Orthodox Church, produced no evidence of extremism and focused instead on the insufficient patriotism of Jehovah’s Witnesses.

“Their disregard for the state,” a report prepared for the prosecution said, “erodes any sense of civic affiliation and promotes the destruction of national and state security.”

In a ruling last year, the court found the two men not guilty and their ordeal seemed over — until Mr. Sivak tried to change money and was told that he had been placed on a list of “terrorists and extremists.”

He and Mr. Stepanov now face new charges of extremism and are to appear before a regional court this month. “There is a big wave of repression breaking,” Mr. Stepanov said.

In response to written questions, the Justice Ministry in Moscow said a yearlong review of documents at the Jehovah’s Witnesses “administrative center” near St. Petersburg had uncovered violations of a Russian law banning extremism. As a result, it added, the center should be “liquidated,” along with nearly 400 locally registered branches of the group and other structures.

For the denomination’s leaders inside Russia, the sharp escalation in a long campaign of harassment, previously driven mostly by local officials, drew horrifying flashbacks to the Soviet era.

Vasily Kalin, the chairman of Jehovah’s Witnesses’ Russian arm, recalled that his whole family had been deported to Siberia when he was a child. “It is sad and reprehensible that my children and grandchildren should be facing a similar fate,” he said. “Never did I expect that we would again face the threat of religious persecution in modern Russia.”
In Russia, as in many countries, the door-to-door proselytizing of Jehovah’s Witnesses often causes irritation, and their theological idiosyncrasies disturb many mainstream Christians. The group has also been widely criticized for saying that the Bible prohibits blood transfusions. But it has never promoted violent or even peaceful political resistance.

“I cannot imagine that anyone really thinks they are a threat,” said Alexander Verkhovsky, director of the SOVA Center for Information and Analysis, which monitors extremism in Russia. “But they are seen as a good target. They are pacifists, so they cannot be radicalized, no matter what you do to them. They can be used to send a message.”

That message, it would seem, is that everyone needs to get with the Putin program — or risk being branded as an extremist if they display indifference, never mind hostility, to the Kremlin’s drive to make Russia a great power again.

“A big reason they are being targeted is simply that they are an easy target,” Ms. Fagan said. “They don’t vote, so nobody is going to lose votes by attacking them.”

Attacking Jehovah’s Witnesses also sends a signal that even the mildest deviation from the norm, if proclaimed publicly and insistently, can be punished under the anti-extremism law, which was passed after Russia’s second war in Chechnya and the Sept. 11 attacks in the United States.

Billed as a move by Russia to join a worldwide struggle against terrorism, the law prohibited “incitement of racial, national or religious strife, and social hatred associated with violence or calls for violence.”

But the reference to violence was later deleted, opening the way for the authorities to classify as extremist any group claiming to offer a unique, true path to religious or political salvation.

Even the Russian Orthodox Church has sometimes fallen afoul of the law: The slogan “Orthodoxy or Death!” — a rallying cry embraced by some hard-line believers — has been banned as an illegal extremist text.
To help protect the Orthodox Church and other established religions, Parliament passed a law in 2015 to exempt the Bible and the Quran, as well as Jewish and Buddhist scripture, from charges of extremism based on their claims to offer the only true faith.

The main impetus for the current crackdown, however, appears to come from the security services, not the Orthodox Church. Roman Lunkin, director of the Institute of Religion and Law, a Moscow research group, described it as “part of a broad policy of suppressing all nongovernmental organizations” that has gained particular force because of the highly centralized structure of Jehovah’s Witnesses under a worldwide leadership based in the United States.

“They are controlled from outside Russia and this is very suspicious for our secret services,” he said. “They don’t like having an organization that they do not and cannot control.”

Artyom Grigoryan, a former Jehovah’s Witness who used to work at the group’s Russian headquarters but who now follows the Orthodox Church, said the organization had “many positive elements,” like its ban on excessive drinking, smoking and other unhealthy habits.

All the same, he said it deserved to be treated with suspicion. “Look at it from the view of the state,” he said. “Here is an organization that is run from America, that gets financing from abroad, and whose members don’t serve in the army and don’t vote.”

Estranged from his parents, who are still members and view his departure as sinful, he said Jehovah’s Witnesses broke up families and “in the logic of the state, it presents a threat.”

He added, “I am not saying this is real or not, but it needs to be checked by objective experts.”

Mr. Sivak, now preparing for yet another trial, said he had always tried to follow the law and he respected the state, but could not put its interests above the commands of his faith.

“They say I am a terrorist,” he said, “but all I ever wanted to do was to get people to pay attention to the Bible.”

Wednesday, March 29, 2017

A Tale of Two Bell Curves

Bo Winegard and Ben Winegard Quillette Online Magazine


“The great enemy of the truth is very often not the lie, deliberate, contrived and dishonest, but the myth, persistent, persuasive and unrealistic” ~ John F. Kennedy 1962

To paraphrase Mark Twain, an infamous book is one that people castigate but do not read. Perhaps no modern work better fits this description than The Bell Curve by political scientist Charles Murray and the late psychologist Richard J. Herrnstein. Published in 1994, the book is a sprawling (872 pages) but surprisingly entertaining analysis of the increasing importance of cognitive ability in the United States. It also included two chapters that addressed well-known racial differences in IQ scores (chapters 13-14). After a few cautious and thoughtful reviews, the book was excoriated by academics and popular science writers alike. A kind of grotesque mythology grew around it. It was depicted as a tome of racial antipathy; a thinly veiled expression of its authors’ bigotry; an epic scientific fraud, full of slipshod scholarship and outright lies. As hostile reviews piled up, the real Bell Curve, a sober and judiciously argued book, was eclipsed by a fictitious alternative. This fictitious Bell Curve still inspires enmity; and its surviving co-author is still caricatured as a racist, a classist, an elitist, and a white nationalist.

Myths have consequences. At Middlebury College, a crowd of disgruntled students, inspired by the fictitious Bell Curve — it is doubtful that many had bothered to read the actual book — interrupted Charles Murray’s March 2nd speech with chants of “hey, hey, ho, ho, Charles Murray has got to go,” and “racist, sexist, anti-gay, Charles Murray go away!” After Murray and moderator Allison Stanger were moved to a “secret location” to finish their conversation, protesters began to grab at Murray, who was shielded by Stanger. Stanger suffered a concussion and neck injuries that required hospital treatment.

It is easy to dismiss this outburst as an ill-informed spasm of overzealous college students, but their ignorance of The Bell Curve and its author is widely shared among social scientists, journalists, and the intelligentsia more broadly. Even media outlets that later lamented the Middlebury debacle had published – and continue to publish – opinion pieces that promoted the fictitious Bell Curve, a pseudoscientific manifesto of bigotry. In a fairly typical but exceptionally reckless 1994 review, Bob Herbert asserted, “Murray can protest all he wants, his book is just a genteel way of calling somebody a n*gger.” And Peter Beinart, in a defense of free speech published after the Middlebury incident, wrote, “critics called Murray’s argument intellectually shoddy, racist, and dangerous, and I agree.”

The Bell Curve and its authors have been unfairly maligned for over twenty years. And many journalists and academics have penned intellectually embarrassing and indefensible reviews and opinions of them without actually opening the book they claim to loathe. The truth, surprising as it may seem today, is this: The Bell Curve is not pseudoscience. Most of its contentions are, in fact, perfectly mainstream and accepted by most relevant experts. And those that are not are quite reasonable, even if they ultimately prove incorrect. In what follows, we will defend three of the most prominent and controversial claims made in The Bell Curve and note that the most controversial of all its assertions, namely that there are genetically caused race differences in intelligence, is a perfectly plausible hypothesis that is held by many experts in the field. Even if that hypothesis is wrong, Herrnstein and Murray were responsible and cautious in their discussion of race differences, and certainly did not deserve the obloquy they received.

Claim 1: There is a g factor of cognitive ability on which individuals differ.  

First discovered in 1904 by Charles Spearman, an English psychologist, the g factor is a construct that refers to a general cognitive ability that influences performance on a wide variety of intellectual tasks. Spearman noted that, contrary to some popular myths, a child’s school performance across many apparently unrelated subjects was strongly correlated. A child who performed well in mathematics, for example, was more likely to perform well in classics or French than a child who performed poorly in mathematics.

He reasoned there was likely an underlying cognitive capacity that affected performance in each of these disparate scholastic domains. Perhaps a useful comparison can be made between the g factor and an athletic factor. Suppose, as seems quite likely, that people who can run faster than others are also likely to be able to jump higher and further, throw faster and harder, and lift more weight than others. If so, there would be a general athletic factor, or a single construct that explains some of the overall variance in athletic performance in a population. This is all g is: a single factor that explains some of the variance in cognitive ability in the population. If you know that Sally excels at mathematics, then you can reasonably hypothesize that she is better than the average human at English. And if you know that Bob has an expansive vocabulary, then you can reasonably conclude that he is better than the average human at mathematics.
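As a concrete illustration of Spearman's logic (this sketch is ours, not the book's or the article's, and every number in it is invented), the following Python snippet simulates four test scores that all draw on a single latent ability. The resulting tests come out positively correlated with one another, and the first principal component of their correlation matrix absorbs a large share of the total variance; that shared component is the kind of statistical object the g factor refers to.

import numpy as np

# Illustrative simulation: one latent "g" feeding four otherwise unrelated tests.
# The loadings, sample size, and seed are arbitrary choices, not estimates from the book.
rng = np.random.default_rng(0)
n = 5000
g = rng.normal(size=n)                      # latent general ability (simulated)
loadings = np.array([0.8, 0.7, 0.6, 0.5])   # how strongly each test reflects g (invented)
noise = rng.normal(size=(n, 4))
scores = g[:, None] * loadings + noise * np.sqrt(1.0 - loadings**2)

corr = np.corrcoef(scores, rowvar=False)    # 4 x 4 correlation matrix of the tests
print(np.round(corr, 2))                    # every off-diagonal entry comes out positive

eigenvalues = np.linalg.eigvalsh(corr)[::-1]          # eigenvalues sorted largest first
print(round(eigenvalues[0] / eigenvalues.sum(), 2))   # share of variance on the first component

The point is only that a single underlying factor is enough to produce the "positive manifold" of correlations Spearman observed; it says nothing by itself about what that factor is or where it comes from.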

Despite abstruse debates about the structure of intelligence, most relevant experts now agree that there is indeed a g factor. The sociologist and intelligence expert Linda Gottfredson, for example, wrote: “The general factor explains most differences among individuals in performance on diverse mental tests. This is true regardless of what specific ability a test is meant to assess [and] regardless of the test’s manifest content (whether words, numbers or figures)…” Earl Hunt, in his widely praised textbook on intelligence, noted that “The facts indicate that a theory of intelligence has to include something like g.” (pg. 109). And Arthur Jensen, in his definitive book on the subject, wrote that “At the level of psychometrics, g may be thought of as the distillate of the common source of individual variance…g can be roughly likened to a computer’s central processing unit.” (Pg. 74).

Claim 2: Intelligence is heritable.

Roughly speaking, heritability estimates how much differences in people’s genes account for differences in people’s traits. It is important to note that heritability is not a synonym for inheritable. That is, some traits that are inherited, say having five fingers, are not heritable because underlying genetic differences do not account for the number of fingers a person has. Possessing five fingers is a pan-human trait. Furthermore, heritability is not a measure of malleability. Some traits that are very heritable are quite responsive to environmental inputs (height, for example, which has increased significantly since the 1700s, but is highly heritable).

Most research suggests that intelligence is quite heritable, with estimates ranging from 0.4 to 0.8, meaning that roughly 40 to 80 percent of the variance in intelligence can be explained by differences in genes. The heritability of intelligence increases across childhood and peaks during middle adulthood.
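The article does not spell out where such heritability estimates come from, but the classical twin design gives a feel for the arithmetic: identical twins share essentially all of their segregating genes while fraternal twins share about half, so the gap between the two groups' IQ correlations can be converted into a rough heritability figure using Falconer's formula. The Python sketch below is purely illustrative, and the twin correlations in it are made-up placeholders rather than figures from the text or any particular study.

# Illustrative only: Falconer's formula from the classical twin design.
# The correlations below are hypothetical placeholders, not data from the article.
def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Rough heritability estimate: h^2 is approximately 2 * (r_MZ - r_DZ)."""
    return 2.0 * (r_mz - r_dz)

# Example: if identical twins correlated 0.75 on an IQ test and fraternal twins 0.45,
# the formula would attribute about 60% of the variance to genetic differences.
print(falconer_heritability(0.75, 0.45))  # 0.6

Real estimates rest on much richer models and data than this back-of-the-envelope formula, which is part of why the published range is as wide as 0.4 to 0.8.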

At this point, the data, from a variety of sources including adoptive twin studies and simple parent-offspring correlations, are overwhelming and the significant heritability of intelligence is no longer a matter of dispute among experts. For example, Earl Hunt contended, “The facts are incontrovertible. Human intelligence is heavily influenced by genes.” (Pg. 254). Robert Plomin, a prominent behavioral geneticist, asserted that, “The case for substantial genetic influence on g is stronger than for any other human characteristic.” (Pg. 108). And even N. J. Mackintosh, who was generally more skeptical about g and genetic influences on intelligence, concluded, “The broad issue is surely settled [about the source of variation in intelligence]: both nature and nurture, in Galton’s phrase, are important.” (Pg. 254).

Claim 3: Intelligence predicts important real world outcomes.

It would probably surprise many people who criticize the fictitious Bell Curve that most of the book covers the reality of g and the real world consequences of individual differences in g, including the emergence of a new cognitive elite (and not race differences in intelligence). Herrnstein and Murray were certainly not the first to note that intelligence strongly predicts a variety of social outcomes, and today their contention is hardly disputable. The only matter for debate is how strongly intelligence predicts such outcomes. In The Bell Curve, Herrnstein and Murray analyzed a representative longitudinal data set from the United States and found that intelligence strongly predicted many socially desirable and undesirable outcomes including educational attainment (positively), socioeconomic status (positively), likelihood of divorce (negatively), likelihood of welfare dependence (negatively) and likelihood of incarceration (negatively).

Since the publication of The Bell Curve, the evidence supporting the assertion that intelligence is a strong predictor of many social outcomes has grown substantially. Tarmo Strenze, in a meta-analysis (a study that collects and combines all available studies on the subject), found a reasonably strong relation between intelligence and educational attainment (0.53), intelligence and occupational prestige (0.45), and intelligence and income (0.23). In that paper, he noted that “…the existence of an overall positive correlation between intelligence and socioeconomic success is beyond doubt.” (Pg. 402). In a review on the relation between IQ and job performance, Frank Schmidt and John Hunter found a strong relation of .51, a relation which increases as job complexity increases. In a different paper, Schmidt candidly noted that “There comes a time when you just have to come out of denial and objectively accept the evidence [that intelligence is related to job performance].” (pg. 208). The story is much the same for crime, divorce, and poverty. Each year, more data accumulate demonstrating the predictive validity of general intelligence in everyday life.

Claim 4a: There are race differences in intelligence, with East Asians scoring roughly 103 on IQ tests, Whites scoring 100, and Blacks scoring 85.

Of course, most of the controversy The Bell Curve attracted centered on its arguments about race differences in intelligence. Herrnstein and Murray asserted two general things about race differences in cognitive ability: (1) there are differences, and the difference between Blacks and Whites in the United States is quite large; and (2) it is likely that some of this difference is caused by genetics. The first claim is not even remotely controversial as a scientific matter. Intelligence tests revealed large disparities between Blacks and Whites early in the twentieth century, and they continue to show such differences. Most tests that measure intelligence (GRE, SAT, WAIS, et cetera) evince roughly a standard deviation difference between Blacks and Whites, which translates to 15 IQ points. Although scholars continue to debate whether this gap has shrunk, grown, or stayed relatively the same across the twentieth century, they do not debate the existence of the gap itself.

Here are what some mainstream experts have written about the Black-White intelligence gap in standard textbooks:

“It should be acknowledged, then, without further ado that there is a difference in average IQ between blacks and whites in the USA and Britain.” (Mackintosh, p. 334).

“There is a 1-standard deviation [15 points] difference in IQ between the black and white population of the U.S. The black population of the U.S. scores 1 standard deviation lower than the white population on various tests of intelligence.” (Brody, p. 280).

“There is some variation in the results, but not a great deal. The African American means [on intelligence tests] are about 1 standard deviation unit…below the White means…” (Hunt, p. 411).

Claim 4b: It is likely that some of the intelligence differences among races are caused by genetics.

This was the most controversial argument of The Bell Curve, but before addressing it, it is worth noting how cautious Herrnstein and Murray were when forwarding this hypothesis: “It seems highly likely to us that both genes and environment have something to do with racial differences. What might that mix be? We are resolutely agnostic on that issue; as far as we can determine, the evidence does not yet justify an estimate.” (p. 311). This is far from the strident tone one would expect from reading secondhand accounts of The Bell Curve!

There are two issues to address here. The first is how plausible is the hereditarian hypothesis (the hypothesis that genes play a causal role in racial differences in intelligence); and the second is should responsible researchers be allowed to forward reasonable, but potentially inflammatory hypotheses if they might later turn out false.

Although one would not believe it from reading most mainstream articles on the topic (with the exception of William Saletan’s piece at Slate), the proposal that some intelligence differences among races are genetically caused is quite plausible. It is not our goal, here, to cover this debate exhaustively. Rather, we simply want to note that the hereditarian hypothesis is reasonable and coheres with a parsimonious view of the evolution of human populations. Whether or not it is correct is another question.

Scholars who support the hereditarian hypothesis have marshalled an impressive array of evidence to defend it. Perhaps the strongest evidence is simply that there are, as yet, no good alternative explanations.

Upon first encountering evidence of an IQ gap between Blacks and Whites, many immediately point to socioeconomic disparities. But researchers have long known that socioeconomic status cannot explain all of the intelligence gap. Even if researchers control for SES, the intelligence gap is only shrunk by roughly 30% (estimates vary based on the dataset used, but almost none of the datasets finds that SES accounts for the entire gap). This is excessively charitable, as well, because intelligence also causes differences in socioeconomic status, so when researchers “control for SES,” they automatically shrink some of the gap.

Another argument that is often forwarded is that intelligence tests are culturally biased—they are designed in such a way that Black intelligence is underestimated. Although it would be rash to contend that bias plays absolutely no role in race differences in intelligence, it is pretty clear that it does not play a large role: standardized IQ and high stakes tests predict outcomes equally well for all native-born people. As Earl Hunt argued in his textbook, “If cultural unfairness were a major cause of racial/ethnic differences in test performance, we would not have as much trouble detecting it as seems to be the case.” (p. 425).

Of course, there are other possible explanations of the Black-White gap, such as parenting styles, stereotype threat, and a legacy of slavery/discrimination among others. However, to date, none of these putative causal variables has been shown to have a significant effect on the IQ gap, and no researcher has yet made a compelling case that environmental variables can explain the gap. This is certainly not for lack of effort; for good reason, scholars are highly motivated to ascertain possible environmental causes of the gap and have tried for many years to do just that.

For these reasons, and many more, in a 1980s survey, most scholars with expertise rejected the environment-only interpretation of the racial IQ gap, and a plurality (45%) accepted some variant of the hereditarian hypothesis. Although data are hard to obtain today, this seems to remain true. In a recent survey with 228 participants (all relevant experts), most scholars continued to reject the environment-only interpretation (supported by 17%), and a majority believed that at least 50% of the gap was genetically caused (52%). Many scholars in the field have noted that there is a bizarre and unhealthy difference between publicly and privately expressed views. Publicly, most experts remain silent and allow vocal hereditarian skeptics to monopolize the press; privately, most concede that the hereditarian hypothesis is quite plausible. Here, we’ll leave the last word to the always judicious Earl Hunt: “Plausible cases can be made for both genetic and environmental contributions to [racial differences in] intelligence…Denials or overly precise statements on either the pro-genetic or pro-environmental side do not move the debate forward. They generate heat rather than light.” (p. 436).

Whatever the truth about the cause of racial differences in intelligence, it is not irresponsible to forward reasonable, cautiously worded, and testable hypotheses. Science progresses by rigorously testing hypotheses, and it is antithetical to the spirit of science to disregard and in fact rule out of bounds an entirely reasonable category of explanations (those that posit some genetic causation in intelligence differences among racial groups). The Bell Curve is not unique for forwarding such hypotheses; it is unique because it did so publicly. Academics and media pundits quickly made Murray an effigy and relentlessly flogged him as a warning to others: If you go public with what you know, you too will suffer this fate.

There are two versions of The Bell Curve. The first is a disgusting and bigoted fraud. The second is a judicious but provocative look at intelligence and its increasing importance in the United States. The first is a fiction. And the second is the real Bell Curve. Because many, if not most, of the pundits who assailed The Bell Curve did not and have not bothered to read it, the fictitious Bell Curve has thrived and continues to inspire furious denunciations. We have suggested that almost all of the proposals of The Bell Curve are plausible. Of course, it is possible that some are incorrect. But we will only know which ones if people responsibly engage the real Bell Curve instead of castigating a caricature.



Bo Winegard is a graduate student at Florida State University. Follow him on Twitter: @EPoe187

Ben Winegard is an assistant professor at Carroll College. Follow him on Twitter: @BenWinegard

Wednesday, March 15, 2017

Metformin

The Unauthorized Biography


A month’s supply costs you about the same as a Starbucks latte. It’s one of the oldest drugs in active clinical use today, and it’s now the first-line drug for almost everyone with newly diagnosed Type 2 diabetes on the planet. Most of the millions of people who take it don’t give it a second thought, but humble metformin may well be the closest thing we have to a miracle drug.

Consider the following: Other than insulin, metformin packs probably the biggest blood-glucose-lowering punch of any diabetes drug on the market, lowering HbA1c levels (a measure of blood glucose control) by up to 1.5%. It protects your heart, and it might even hold some cancers at bay. It gets along well with a wide variety of other drugs and treatments, and by most measures, it’s safer than most other prescription drugs. Impressively, it’s risen from the ranks of a “me-too” drug (a drug that’s very similar to an existing drug) to the very pinnacle of diabetes treatment worldwide.

The birth of metformin

When metformin was born to Dr. Jean Sterne at Aron Laboratories in Paris, France, in 1959, its proud father had no way to foresee how it would change the world. Initially (and still) sold under the trade name Glucophage, Greek for sugar eater, it would grow up to be a superstar, the most prescribed diabetes drug on the planet.

Like most drugs, metformin has its roots in a plant — in this case, the French lilac (Galega officinalis). Research into this plant’s potential as an antidiabetic agent dates back to the early 1920s, but major efforts were abandoned with the discovery and development of insulin. It wasn’t until 30 years later, in the search for oral drugs to control diabetes, that these efforts resumed. While the French lilac has long been known to have glucose-lowering properties, it has also long been known to be poisonous. Because it is dangerous to livestock, here in the United States it’s listed as a noxious weed in 12 states, including pretty much every state it grows in.

And just how does metformin lower blood glucose? No one knows, despite the fact that it is one of the most studied compounds in the world, having engaged over 13,000 clinical researchers and generated more than 5,600 published studies over the last 60 years. The leading theories on metformin hold that it limits glucose production in the liver, or that it helps muscle tissue take in glucose. Or that it slows carbohydrate absorption. Or that it’s a mild insulin sensitizer. It’s probably a combination of all of these factors, although this is far from a definite answer.

But metformin does work, and it works fast, nearly from the first pill. It also carries little risk of overdoing its job; when used alone as a treatment, metformin rarely causes hypoglycemia (low blood glucose). It does not cause weight gain, and in many people it causes mild weight loss. It reduces the risk of heart attack, can be combined with other blood-glucose-lowering drugs, and has few harmful side effects. Yet in the beginning, metformin was nowhere near as beloved as it is today.

Early setbacks

Metformin has been in clinical use now for over 50 years, a stellar run that’s bested only by aspirin. (Insulin, as a category, is closing in on the 100-year mark, but no one formulation of insulin comes even close to metformin’s Golden Jubilee.) But it didn’t have an easy childhood.

Fears of lactic acidosis, a wickedly dangerous side effect from some members of the biguanide family of medicines, of which metformin is a part, delayed its approval by the Food and Drug Administration (FDA) in the United States until 1995 — fully 38 years after the drug’s deployment in Europe. Lactic acidosis is a metabolic crisis in which the blood becomes acidic. It frightens doctors and patients alike because of its reputation as a one-way street, with an overall mortality rate above 75% and a median survival time of only 28 hours.

But what’s the risk of lactic acidosis from metformin, really? Cartoon character SpongeBob SquarePants may have said it best in the famous episode on sea bears: “Sea bears are no laughing matter. Why once I met this guy, who knew this guy, who knew this guy, who knew this guy, who knew this guy, who knew this guy, who knew this guy, who knew this guy, who knew this guy, who knew this guy, who knew this guy’s cousin…” Like sea bears, metformin-induced lactic acidosis is something very few doctors have actually seen with their own eyes. But rumor and hearsay kept prescriptions low during metformin’s early years in the United States, despite the drug’s already long clinical career in Europe.

But now, after 50 years in the trenches, we know just how safe metformin really is. At the very worst, the rate of lactic acidosis associated with metformin is 3 cases per 100,000 patient-years. And on those exceptionally rare occasions when lactic acidosis is seen in metformin users, the fatality rate appears to be much lower than is usually seen when other drugs or conditions cause lactic acidosis. By comparison, the arthritis medicine celecoxib (Celebrex) carries an associated all-cause mortality rate of 1,140 per 100,000 patient-years.
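
To make those per-100,000 figures a little more concrete, here is a rough back-of-the-envelope sketch in Python. The clinic size and follow-up period are invented purely for illustration, and keep in mind that the celecoxib number measures all-cause mortality rather than lactic acidosis, so the ratio is only a loose comparison.

```python
# A back-of-the-envelope look at the rates quoted above.
# The clinic size and follow-up period are purely illustrative assumptions,
# not figures from any study.

metformin_la_rate = 3 / 100_000             # worst-case lactic acidosis cases per patient-year
celecoxib_mortality_rate = 1_140 / 100_000  # quoted all-cause deaths per patient-year

patients = 2_000    # hypothetical clinic following 2,000 metformin users
years = 10          # hypothetical follow-up period

patient_years = patients * years
expected_cases = metformin_la_rate * patient_years

print(f"Expected cases over {patient_years:,} patient-years: {expected_cases:.1f}")
# -> 0.6, so most clinics would never see a single case

print(f"Quoted celecoxib figure is {celecoxib_mortality_rate / metformin_la_rate:.0f}x higher")
# -> about 380x, though the two rates measure different outcomes
```
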
But does metformin really cause lactic acidosis at all? One recent study, first published in the Cochrane Database, looked at pooled data from 347 recent clinical studies. In all of these clinical studies, there were no cases of lactic acidosis among participants who were assigned to take metformin. The new study also points out that people with diabetes are more prone to lactic acidosis than the general population in the first place. Other studies have shown that rates of lactic acidosis in non-metformin arms of clinical studies are actually higher than in metformin arms, seriously calling into question the conventional wisdom that metformin causes lactic acidosis.

Why, then, has this fear been so widespread? Metformin actually wasn’t the first member of the biguanide family of drugs to hit the market. It was preceded by buformin and by phenformin, which is now banned nearly everywhere. In contrast to metformin’s theoretical lactic acidosis rate of 3 cases per 100,000 patient-years, phenformin had a rate more than 20 times higher. It was pulled from the market following a number of high-profile deaths in France in the 1970s.

By the way, even if metformin does cause lactic acidosis, it’s not the only cheap pill to do so. Lactic acidosis is also associated with overdoses of acetaminophen, more commonly known by its brand name, Tylenol.

Metformin makes it big

Once a small-town kid suspected of mischief, metformin is now embraced by the International Diabetes Federation, the American Diabetes Association (ADA), and the European Association for the Study of Diabetes as the first-line drug for Type 2 diabetes. In fact, a few years ago the ADA dropped its long-standing recommendation to start Type 2 diabetes treatment with just diet and exercise. Now the group recommends diet, exercise, and, when necessary, metformin.

So how did metformin achieve this career transformation? It wasn’t until the UK Prospective Diabetes Study (UKPDS) was released in 1998 that the floodgates of acceptance opened. The UKPDS gave American doctors solid clinical evidence of metformin’s effectiveness at both lowering blood glucose and improving cardiovascular outcomes, just at the time when medicine in the United States was moving to a more evidence-based framework. Metformin began to pick up speed, and it hasn’t really hit any stumbling blocks since then.

According to the IMS Health National Prescription Audit, roughly 80 million prescriptions for metformin were dispensed in the United States alone in 2015. If you’re wondering how 29.1 million people with diagnosed diabetes can use nearly three times as many prescriptions, it’s because each time someone refills a 30- or 90-day supply of metformin, it counts as one prescription. Still, that’s a lot of metformin. Even at a measly four bucks a pop, the drug grosses more than 300 million dollars per year in the United States alone. And globally, metformin is the most widely prescribed diabetes drug. Its father, Dr. Sterne, would be proud.
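
For readers who want to see how those numbers fit together, here is a quick sketch of the arithmetic. The $4 price is just the “four bucks a pop” generic figure used above; everything else comes straight from the paragraph, and the result is only a ballpark.

```python
# Rough arithmetic behind the prescription and sales figures above.
# The $4-per-fill price is the generic "four bucks a pop" figure from the text;
# real-world prices vary by pharmacy, fill size, and formulation.

prescriptions_2015 = 80_000_000     # U.S. metformin prescriptions dispensed in 2015 (IMS Health)
people_with_diabetes = 29_100_000   # Americans with diagnosed diabetes

fills_per_person = prescriptions_2015 / people_with_diabetes
print(f"Prescriptions per person with diagnosed diabetes: {fills_per_person:.1f}")
# -> about 2.7, which makes sense when each 30- or 90-day refill counts as its own prescription

price_per_fill = 4.00               # assumed generic cash price, in dollars
gross_sales = prescriptions_2015 * price_per_fill
print(f"Rough U.S. gross at ${price_per_fill:.2f} per fill: ${gross_sales:,.0f}")
# -> about $320,000,000 per year; pricier brand-name and extended-release fills push the real total higher
```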

Metformin gets married

Metformin works well with other medicines, giving rise to precious few drug interactions. More than that, combinations of metformin and other glucose-lowering drugs have been shown to be significantly more potent than either medicine alone — and sometimes even more potent than the sum of each drug’s individual effect. Since getting people to take multiple prescription drugs can be a challenge, metformin has been married to a number of other diabetes medicines to create “polypills,” capsules or tablets with more than one drug in them.

Metformin has been combined in diabetes polypills with sulfonylureas (in Metaglip, Glucovance, Amaryl M), thiazolidinediones (ACTOplus met), DPP-4 inhibitors (Janumet, Galvumet, Kombiglyze XR), and SGLT2 inhibitors (Invokamet, Xigduo XR, Synjardy). Globally, there are now more than 20 polypills containing metformin, and the list is likely to continue to grow as new diabetes drugs are developed.

Metformin’s descendants

Although garden-variety metformin hasn’t really changed in 50 years, several new formulations have been introduced since that time. For people who have a hard time swallowing pills, metformin comes in a liquid formulation called Riomet. The most popular variant, however, is an extended-release version of the drug. Metformin is only absorbed within the body at the very upper part of the gastrointestinal tract, and any portion of the drug that passes further “downstream” is simply excreted. The trick to extending the action of the drug, then, is to keep the pill in the stomach longer while releasing the medicine slowly.

The most commonly prescribed extended-release version of metformin is Glucophage XR. This pill accomplishes its mission with a polymer that turns into a gel in the stomach, slowing the release of the medicine so it is absorbed gradually. This XR formulation has been shown to prolong absorption, with blood levels peaking around seven hours after a dose, compared with roughly three hours for traditional metformin.

Indian researchers have also experimented with a floating pill that would stay in the stomach for even longer, slowly releasing metformin the entire time. For as long as metformin remains a popular diabetes drug, it is a safe bet that researchers will be trying to create new and innovative ways to deliver it to the body.

A promising future

As of today, metformin is FDA-approved only for use in Type 2 diabetes as a blood-glucose-lowering agent. However, it is increasingly used off-label by people with Type 1 diabetes to reduce insulin requirements, which it most likely achieves through its insulin-sensitizing effects. Some doctors also prescribe it to people with Type 1 diabetes who are overweight, to counteract the weight gain from insulin that some people experience.

Beyond diabetes, metformin is an effective treatment for polycystic ovary syndrome (PCOS), for which it can increase ovulation rates fourfold. While it is not FDA-approved for this use, metformin features prominently in the treatment guidelines for PCOS of many organizations worldwide, including the American College of Obstetricians and Gynecologists. In the area of HIV/AIDS treatment, metformin is sometimes used to reduce cardiovascular risk factors. And far on the cutting edge of medical research, metformin is being evaluated for its potential to reduce the growth of tumors.

Closer to its original home, metformin is increasingly used in efforts to prevent Type 2 diabetes, or at least to delay the onset of full-blown diabetes in people with prediabetes. Although metformin is not FDA-approved for prediabetes, more and more doctors prescribe it to keep blood glucose levels in the normal range for as long as possible.

After more than 50 years, it is safe to say that metformin is still going strong.