Wednesday, July 12, 2017

Automation Against Capitalism

Automation is Capitalism's great new prize and its most potent challenge. At once, it breaks the back of organised labour and throws into disarray the carefully constructed social system that we call Capitalism. It is Capital - that's what machines, robots and know-how are - becoming supremely productive and utterly meaningless at the same time. It is the realisation of a utopia, but also a moment of reality. It would potentially expand supply infinitely, as finite human time will no longer be required, while perversely limiting demand, as nothing that is produced could then be bought.

The last bit is indeed the classic Marxist argument, but from the vantage point of the 21st century, we see something that Marx did not. First, though Marx made some very insightful predictions, the empire was still only taking shape, and at the time of Marx's death the integration of the global economy was still in its infancy. Also, for Marx, nineteenth-century capitalism was a relentless pursuit of efficiency. It was about converting every scrap of human life into productive work, with just as much reward for workers as was needed to preserve the scrap of human life they were allowed to live; the rest went to the owner of Capital. And, finally, Capital in Marx's world was a finite commodity: remember, this was the time of the Gold Standard!

These are the three things we know now. First, Capitalism has defied Marx's prediction of imminent demise by progressively expanding into the farthest reaches of the globe and bringing new consumers into its fold, who would slave away their time to get a piece, if only a crumb, of the cake. Further, Marx did not see Capitalism's unique tendency to create meaningless jobs - jobs which have no productive use other than hooking people to the elaborate system of signs and desires - which gave it a 'viral' character. The system did not generate surplus by squeezing out productive efficiency; it created surplus by creating useless demand. This indeed wouldn't have been possible in Marx's world of sound money, but that world was long gone: the Gold Standard, and its successor, the Gold Exchange Standard, were conveniently binned, and an intricate but dubious system of fictitious capital was constructed to monetise the future.

Capitalism survived, and survived well. It advanced by breaking down traditional communities, ways of living and methods of transaction; it encompassed the globe and monetised every living moment. It created layers and layers of useless jobs: jobs for reality TV stars, models and celebrities, whose role is to create allure and keep us hooked; all those Consultants, who recycle received wisdom and specialise in making slide decks; all those myriad middlemen and salespeople, who sell fictitious financial products of dubious value to each other; and so on and so forth. And all this was paid for with credit, created out of thin air by the modern financial system, predicated on people slaving away their future time in the pursuit of more.

And, what if they don't? We may be at the moment when the delusion of Capitalist sign-making reaches its pinnacle and deludes itself; fools itself in the business of making fools; when signs become so all-encompassing that reality is erased. As machines step into the workplace and take away jobs, not only does the semblance of shared prosperity vanish, but so does all that future labour time on whose basis the credit was built. Amid all the slickness of the Robot-produced future, the debts that built Capitalism have to be reset, as there would be no one, or at least not enough people, to pay for them.

The Robotic future is therefore as calamitous to Labour power as to the current form of Capitalism. It is no straight road to Marx's "Hunt in the morning, Philosophise in the evening" utopia (a passage which he took out after he wrote it, apparently out of embarrassment) but rather a scary challenge to all those plotting the future: If Robots do all the work, who pays for the Debt? And, if we are to reset all credit, can the Robots be there at all?

Indeed, one knows the answer: We will find a way - we always find a way! And, indeed, we will. But all things that have a beginning will have an end. We are perhaps living in end times, when our ability to exploit the frontier and mine the future to create a system of illusory jobs and fictitious capital comes to a close. Surprisingly, a system's greatest triumph often looks like its end; that is usually how History plays out.

  

 


Incubators and Universities: Need For A New Model

As the crisis in jobs becomes apparent, many think that the way to maintain the Middle Class society is to be found in entrepreneurship. In their minds, it is a straightforward transition: People not finding jobs would start businesses. In some quarters, those looking for jobs are already maligned - 'Job Takers' they are called - as opposed to those committing themselves to the entrepreneurial journey, the 'Job Creators'. As always, the reality is harsher than the theory. But my point is not to challenge the idea that there should be more entrepreneurs. It is how to get there that I have questions about.

More specifically, my doubts are about the new trend of creating university-based incubators, US style, in the universities of developing countries. The incubators are taking the place of 'Placement Offices', or what was euphemistically called the 'Industry Collaboration Office', becoming the last mile of the student's life cycle in a university or a business school.

The idea behind these incubators is to replicate the successes of the incubators in the top universities of the world. They are inspired by the stories coming out of the likes of Stanford and MIT. The governments are excited about them too, and treat the incubators as solutions to the jobs crisis on their hands. However, the trouble is that the universities in the developing world, particularly those in ex-colonies, are very different institutions from the American ones, and they are hardly designed to be hotbeds of innovation.

It is a mistake to see all universities as the same, when the Colonial University was set up with the very purpose of standardisation and of connecting colonial education to colonial employment. Indeed, the countries are now free, but most of them maintained their colonial institutions and see modernity in the continuity of the traditions bestowed upon them by the Colonial administrations. This was specifically the intention of the British administrators, who appreciated the value of soft power long before the term was coined. And, among the institutions of the Colonial age, the universities were the most revered, seen as gifts of science and reason, an intimate ally of the modernising politicians who took over the running of the countries after the Colonialists left.

The universities, therefore, are factories to create servants of the state. The whole university culture, with the possible exception of some elite technocratic institutions set up post-independence in some of the countries, is usually deeply rooted in the desire to maintain bureaucratic continuity, rather than to disrupt and innovate. Their students come looking for a qualification that will lead to a job, and their aspirations are more narrowly defined than those of their counterparts in metropolitan nations. The idea of the university as a fountainhead of innovation, therefore, stands on a false premise.

In a way, university-based incubators work against the grain of the host societies, where innovation mostly happens outside the universities. They also impose assumptions which are alien and unworkable, like a bias towards younger entrepreneurs, though family support structures are different in many developing countries and people starting enterprises at a relatively later stage of life are far more common. Indeed, the investors sometimes work with the assumptions they learned at American business schools, and override the considerations of the local labour market and society. However, this is part of the problem rather than a justification of a wrongly designed system.

To my mind, there are two things that need to happen. One, and this is close to my heart, is to create Enterprise Schools, built upon a culture of entrepreneurship, which will attract a specific kind of person and support them through a longer development cycle. Two, and this is perhaps more scalable, while the incubators may be university-based - if only because of the lower real estate costs - they should mandatorily create mixed cohorts, drawing from the outside population, particularly including people who already have work experience.

In summary, my recommendation is that the incubation model needs to be reinvented for the developing countries, rather than the plug-and-play approach that is now prevalent. This needs a conversation, not blind faith. Enterprise is not a straightforward solution to the jobs problem, as it requires changing markets, newer opportunities and the upsetting of existing corporate primacy, and this, before everything else, needs an opening of minds and engagement at a different level.



Tuesday, July 11, 2017

What Does A Tech Mahindra Phone Call Say About The Indian IT Industry



Last week, a voice recording of an HR executive firing an employee at Tech Mahindra, a big Indian IT company, went viral. The employee was told that he was being fired not because of any performance issues, but because of 'cost optimisation'. He was told to resign by the end of the day, failing which he would be terminated the next day, would lose all his exit benefits and wouldn't even get a reference. When the employee pleaded that it was too short a notice, he was told that the company could fire him summarily. When he sought an option to appeal, he was told there was none.

After this went viral, many weighed in, converging on the consensus that while the company might have the right to fire the employee, it was all too harsh. As for me, I thought it was coercive, and therefore illegal: I can't see how a company can fire an employee on disciplinary grounds because he failed to resign as told. In America, this, aggregating the claims of all employees fired in this manner, would have made a multi-million dollar class action lawsuit.

Anand Mahindra, the Chairman of the company and a business leader who maintains an enlightened image, was quick to issue an apology on Twitter. His other senior colleagues followed, in a damage-control exercise. It is not known whether anyone has actually been disciplined or fired for this stupidity.

The essence of these apologies was that the manner of this firing was harsh, which undeniably it was. However, the commentary that followed accepted these firings as inevitable. The narrative coming out of Indian IT companies is that they have been caught out by a 'convergence' of several factors - automation, productisation, protectionism - and that their business models are changing. They hope to become more nimble, move up the value chain and come up with innovative solutions. These firings, harsh as they may be, are steps towards that better, brighter future.

This narrative is of course going nowhere, as the call shows. Legalities aside, anyone listening in to that call can't miss the contempt with which the employee was treated. At one point, he was told that he obviously couldn't appeal to the CEO (the question is, why not?). These are the layers of disdain one sees on Indian streets - the guys in the big cars treat the guys in small cars with contempt, who in turn treat the scooterwallah with contempt, who then treats the pedestrians with contempt, and so on and so forth. Of course, Tech Mahindra can't become a magnet for world-class talent tomorrow just by firing a few unfortunate employees at the bottom of the food chain. If spreadsheet savvy created great companies, the world would have been a different place today. Clearly the company treats its employees like cattle, and it is going nowhere with that culture.

Besides, the would-be super-innovator also seemed to have no idea of social media. Otherwise, why would it let loose an obviously untrained and emotionally deficient HR Exec in a bullying match with its employees? Before they unleashed the best practices in firing that they may have learned from some American company they love to ape, why did they not realise that there is an entire cottage industry of 'how to fire people' in America? Well, the obvious answer is that they did not think about it. That should tell their customers how much they really understand about the world of social technologies.

The PR exercise that the senior execs are doing won't save the company, as it will only obscure the broader issues of commitment and culture. Nothing changes in a big company unless the share prices plummet or the customers vote with their feet. The former will not happen because the spreadsheet boys will speak to spreadsheet boys, buy their theory of 'convergence', and miss the signs of decay. The latter will also not happen because the American customers were treating those Indian IT workers with funny accents as cattle in any case, and wouldn't care if a few thousand were fired. Until, indeed, the whole edifice comes crashing down again.




The Eurasian Moment in World Politics

The world of politics is changing profoundly. It is not just about the rise of the strongmen rulers - President Xi of China, Prime Minister Abe of Japan, Prime Minister Modi of India or President Duterte of the Philippines - or their perennially ubiquitous counterparts in Mr Putin, Mr Erdoğan, Mr Netanyahu and Mr Zuma. The shift that we are seeing is more than the shocks, such as Brexit or a Trump Presidency, or the ascendance of extreme nationalists like Marine Le Pen in France, Geert Wilders in the Netherlands or Norbert Hofer in Austria. The anti-Semitic rallies in Poland, the authoritarian Viktor Orbán in Hungary, the absurd Beppe Grillo in Italy and the abhorrent Golden Dawn in Greece are all part of a big shift, which is not just about the rise of nationalism and the breakdown of the post-war institutions. There may be a more fundamental shift underway.

Discussion of such a shift is not new. It has been discussed in scholarly circles for some time. But since last year it has reached the mainstream media, for good reasons. It does seem that the anticipation of such a shift is now central to strategic decision-making in various large countries, including Russia, Germany, China and Turkey. And, after Trump's ascendance to the Presidency, such a shift has become one of the key factors in strategic decision-making even in the White House.

I am referring to the shift of power from the Atlantic seaboard to the Eurasian plain, something that the nineteenth-century British geo-strategists foresaw. That their vision did not come to pass is perhaps because of the rise of America as a global power in the dying years of the nineteenth century, when American industrial might and America's military ability and willingness to engage changed everything else, followed by the Great War, the Russian Revolution and the dividing lines subsequently drawn through the world. Eurasia faded out of the spotlight as a strategic theatre as Europe emerged.

Indeed, this was not just a twentieth-century affair: Eurasia had dominated world history ever since the decline of the Romans, but its relative decline started with the improvements in long-haul shipping and the voyages of Columbus and Vasco Da Gama. It was back in contention in the nineteenth century, with the Russian and British empires jostling for influence, until the Americans entered the fray (after a deeply divisive national debate) and changed everything. For the next hundred years or so, American power, primarily represented by the overwhelming power of its carrier groups, dominated the world. The unfortunate Eurasian expedition by the Russians into Afghanistan ended badly, and led to the breakdown of that empire.

There are several reasons to think this may now change. The global nature of American power is not well supported by shared prosperity at home, and domestic considerations may force a disengagement from wider global policing in favour of limited and specific engagements required by 'national interest'. In many ways, this is a result of the over-reach of the Bush years and the consistent foreign policy failures under Obama, when America's overseas engagements became costly and meaningless. 'Isolationism', if we call it that, was always a force in American politics, but George W Bush's adventurism and Obama's indecision have now undermined the case for 'interventionism' so much that the former makes sense to most Americans.

This change does not undermine the United States, as it controls the world's most powerful military and remains the biggest economy. It means, however, its disengagement from Europe and greater engagement in Eurasia. It also means an economic revival of the Eurasian region, as President Xi builds infrastructure and brings manufacturing and trade to inner China. It also means a great human movement, as global warming melts the Siberian ice and some of the great rivers running through South and South-East Asia start faltering (and indeed, global warming may also mean that some of the coastal cities are completely lost).

From the vantage point of the Trump administration, which wants to reduce global engagements and restructure the American economy and society, such a shift is only problematic if one has to cling to the dated geo-politics of the post-Cold War world. They, along with many other nations in the world, are adjusting to this new geopolitical reality. In a perverse way, Britain's shift - from Europe to the old Commonwealth - is also a pivot in this direction. Germany, with greater engagement with China's OBOR, is already signalling its understanding of this shift. 

I believe this shift is real, not just because of the geopolitical logic but also because of the conscious actions of countries and leaders. Some countries are blissfully oblivious - India seems to be one among them - while others, such as Britain, are scrambling as they see themselves losing out. We may be at a moment that comes once in many centuries, a turning of a long-term trend visible only from the long-view vantage point. This would impact not just politics - though this may be where it starts - but business, economies and the lives of people.

Monday, July 10, 2017

Ideas and Ideology

Ideas are fascinating and exciting. We live in a culture that celebrates ideas. In a sense, we see all history as the history of ideas now. It is ideas that make men great, and the great men are those who labour with ideas, either to bring them into being or to create impact with them. Entrepreneurs, our modern Heroes, are the idea-warriors, who put everything at stake to make their ideas work. Ideas, in short, are divine inspirations, whose blessing we all seek and whose existence makes us meaningful.

But there is a dark side to ideas, which never gets talked about. All the monstrosities of the last two hundred years have been committed in the name of ideas. And, indeed, if one counts religion as an idea, the history goes back much further. Just as we transformed the Great Men doctrine into a narrative of great ideas, we should also perhaps replace our evil men doctrine with a narrative of bad ideas.

However, I anticipate an objection coming: Many ideas which turned out to be pure evil did not appear so at first. It takes a purely evil man, such as Hitler, to make an idea, such as Race Theories, really evil. And thereon follows the usual Liberal vacuity: No idea is inherently great or evil, it's what men make of it!

That is all nonsense. Ideas don't exist independent of men. We may make an idea sound like an object in itself, but ideas are really words and actions coming from people. They have no separate existence. And, besides, the notion that all great men are men of great ideas and yet an idea needs evil men to become evil is a have-your-cake-and-eat-it-too argument.

It is time we had a reasoned debate about the downside of ideas. At every crisis point of history, this was quite obvious. For example, the Pragmatists in the United States, writing after the horrors of the Civil War (in which Oliver Wendell Holmes fought), understood it perfectly: "Ideas should not become ideology", as John Dewey would later maintain. Stalin and Mao took the ideas of the perfect society just too far. But these are only the well-known examples. Untold crimes have been committed in the British Empire, the Commonwealth countries, the United States and other parts of the world, in the name of ideas. The modern state, all-seeing and all-powerful, inflicted upon its people all kinds of forced behaviour, in the name of national interest and the common good. Austerity, a recent idea, which argues that the state should live within its means, though that does not apply to defence spending or things like Monarchical maintenance, has also been taken to the extreme, but has avoided scrutiny. When things have gone wrong, someone fell on his sword, but the idea lived on.

Why do I write about this now? Because ideas are seductive, and the perfectibility of human beings is not an idea monopolised by Dictators. These assumptions sit under every policy document, every technology business plan, every business school, every self-development formula, and the claims of theory, science and technology. They touch our daily lives every moment, and most of our lives are lived within the matrix of options set up by ideas of perfectibility and neat behaviour. And this idea is not just a passive framework: it is actively, intrusively, ubiquitous. There are nations around the world - India among them - where the quest for the creation of a pure people is real: The Republican Democratic constitution that the country was set up with is being torn apart in the search for a pure 'Indianness', just as the Japanese, the Chinese, the British, the Polish and the Hungarians set out on similar journeys. The ideology of ideas is all-encompassing and inescapably alluring.

While I argue against the purity of ideas, the alternative, I am told, is relativism: If you don't believe in an idea, then you are a drifter, without roots, without a truth. But this, again, is the fallacy of the purity of ideas, as if the Truth existed outside human consciousness. As we build our world, it is best to acknowledge our role in it; to accept that life isn't perfect and our standards are, largely, defined by circumstances. Variability and malleability are the only truths of human existence. And so it should be.

Therefore, it is sensible to keep Dewey's dictum in mind: Ideas should not become ideology. We are better as observers than as judges; flexibility is an inevitable aspect of human existence. We are beings in time, our consciousness is fragile, temporal and grounded: So is our knowledge. If a bigger truth exists, the best we could do is to be sceptical about it and search for it, but never, never, never should we pretend to have found it.




Thursday, July 06, 2017

Evolution of Meritocracy: American Eugenics, Intelligence Testing and The Making Of Modern Meritocracy

Introduction 

In the second decade of the new millennium - now - new questions about human abilities and human worth have arisen. A vast industry of computerisation and the gradual rise of 'machine intelligence' have challenged the prospect of ever-improving urban middle-class life, replacing a vast number of secretarial, administrative and other 'middle ability' jobs with computer programmes, cheap workers overseas and, increasingly, robots. Stagnant wages, disappearing jobs and the breakdown of the 'American Dream', in its many global variants, have led to a new 'struggle for existence' in the workplace.


This technological phenomenon has also meant an inversion of the roles of Capital and Labour in the production process. With the decline of large factories and their unionised workforces in the West (replaced by large factories and their non-unionised labour in China and Indonesia), and with most people turned into keen consumers of the latest gadgetry, collective bargaining has fallen out of popular favour, and a new hero, the billionaire entrepreneur, has captured people's imagination. With political acquiescence, falling taxes have accompanied rising corporate profits, and returns on wealth have far surpassed the growth in wage income, leading to unprecedented and ever-increasing levels of inequality. The rationale of this 'winner takes all' society is underpinned by a worship of the 'smart', an ethic of outsized reward for intellectually gifted individuals sorted through a selective system of education and economic competition.

Michael Young, the British Socialist education thinker, created an odd term - 'meritocracy', mixing a Latin root with a Greek one - to paint a deliberate dystopia set in 2034. Meant as a critique of the British Education Act of 1944, which attempted to sort British society into selective Grammar Schools and non-selective Secondary Moderns on the basis of aptitude tests administered nationally to 11-year-olds, Young's 'Rise of the Meritocracy' ends badly, with a revolt of 'the populists'. Written in the embarrassing shadow of Nazi eugenics, which seemed to put the conflation of human intelligence and human worth on the wrong side of public opinion, Young's dystopia was never supposed to come to pass. Yet, sixty years on, 'meritocracy' has become one of the key organising principles of society, particularly in America.

The Atlantic reports that 'American society increasingly mistakes intelligence for human worth', pointing out:

As recently as the 1950s, possessing only middling intelligence was not likely to severely limit your life’s trajectory. IQ wasn’t a big factor in whom you married, where you lived, or what others thought of you. The qualifications for a good job, whether on an assembly line or behind a desk, mostly revolved around integrity, work ethic, and a knack for getting along—bosses didn’t routinely expect college degrees, much less ask to see SAT scores. As one account of the era put it, hiring decisions were “based on a candidate having a critical skill or two and on soft factors such as eagerness, appearance, family background, and physical characteristics.”

The 2010s, in contrast, are a terrible time to not be brainy. Those who consider themselves bright openly mock others for being less so. Even in this age of rampant concern over microaggressions and victimization, we maintain open season on the nonsmart. People who’d swerve off a cliff rather than use a pejorative for race, religion, physical appearance, or disability are all too happy to drop the s‑bomb: Indeed, degrading others for being “stupid” has become nearly automatic in all forms of disagreement.

The same article mentions the 'Darwin Awards', which originated from a Usenet newsgroup in 1985 to 'commemorate those who improve our gene pool by removing themselves from it', and are 'conferred' on individuals who died while attempting something 'stupid', such as trying to climb out of one's bedroom by the Ethernet cable. The excesses of Nazi racial policies might have been crucial in establishing the moral element of the Allied victory in the Second World War, but the modern fetish with 'smart' seems to make this apparently cruel award appear humorous. However, the invocation of Darwin and genetic science, an idiosyncratic and harmless twist of popular culture, is symptomatic of the 'episteme': referencing science to make the meaningless respectable.

This essay is an attempt at a genealogical presentation of the modern idea of 'meritocracy', particularly in America, and of an institutional innovation that underpins its love for the 'smart': the SAT (originally the Scholastic Aptitude Test). The science of intelligence is hotly contested and has gone through several cycles of claims and debunking, but the SAT represents an institutional innovation that has continued to exist and advance regardless of the state of scientific knowledge, while at the same time deriving legitimacy from scientific methods and practices. Created at the bidding of powerful men and institutions, it drew on the heritage of IQ testing, and yet successfully relabelled itself when IQ testing fell out of favour. With the success of American commerce and American technological innovation, the SAT has become a global shorthand for Meritocracy, enabling the rise of a testing-and-education industrial complex and spreading the ethic of 'meritocracy' globally.

The SAT is clearly an institution far more revered and consequential than the Darwin Awards, but it is essentially built on the same three elements: the idea of 'aptitude' (a shorthand for 'intelligence' invented, as will be discussed later, for the sake of public opinion) as a biological attribute of the individual; the popular understanding of Darwinian 'Natural Selection'; and legitimation through science and scientific methods. This essay will present the narrative of the SAT in context, from its origins in Eugenics and the modern science of 'intelligence' testing to the current positioning of the SAT as a universal shorthand for 'merit', exploring how a Eugenicist artefact, based on questionable scientific claims, has become hegemonic and come to provide justification for an unequal society today.

A Darwinism Without Darwin 

Charles Darwin wrote to Francis Galton, his half-cousin, on 23rd December 1869, on reading the latter's Hereditary Genius (not fully, but only the first 50 pages at the time of writing the letter, by Darwin's own admission):


I do not think I ever in all my life read anything more interesting and original--and how well and clearly you put every point! ….. You have made a convert of an opponent in one sense, for I have always maintained that, excepting fools, men did not differ much in intellect, only in zeal and hard work; and I still think this is an eminently important difference.

In the book, Galton attempted a statistical analysis of data from biographies and biographical dictionaries of great men, to show that eminence was measurable and inheritable. Galton was not the first to study genius and personality: Raymond Cattell, one of the later leading lights of the field, would describe the development of 'human knowledge about personality' in three historical phases, starting with a literary and philosophical phase, followed by a 'proto-clinical' phase of studies of mental illness, and finally the quantitative and experimental phase, a phase that Galton's work ushered in. His selection of 'eminent men' was questionable, and despite claims of objectivity, it was influenced by archetypes of the Romantic Genius (Galton excluded a number of practical men of eminence, like politicians and civil servants). But his work was groundbreaking in its use of 'multivariate analysis', a statistical technique later perfected by Charles Spearman, which combines different and multi-faceted observations into an aggregate measurement of a single cause.

Darwin was, however, no 'convert'. His own thought about the nature of intelligence, as he pointed out in the letter, was in conflict with Galton's idea of determination by inheritance. As Howard Gruber maintains, "(f)or adaptive behavioural change to precede and influence structural change, it is necessary that previously inherited structures do not completely determine behaviour." However, beyond the question of inheritance, there was a deep philosophical difference between Galton's thesis about 'genius', which would underpin the Eugenics movement he went on to conceive, and Darwin's ideas about the 'Struggle for Existence', and it is particularly relevant in the context of the present discussion.

While Darwinian theory shared its vocabulary with earlier thought about the 'Struggle for Existence', and Darwin famously and self-reportedly got his inspiration from reading Malthus, Darwin's conception of 'struggle' diverged from these earlier ideas, in which the 'struggle for existence' works to preserve the 'integrity' of the species by weeding out the weak and benefiting the strong. This notion of 'struggle' preserving the basis of a species was, in fact, anti-evolutionary, supporting a view such as that maintained by John Crawfurd of the Ethnological Society of London: "Nature, in some cases, takes some pains for preserving the integrity of the species but never for its improvement by mutation." The Malthusian struggle for existence, and Spencer's ideas about preservation of the 'type' of the species, were set in this tradition. Darwin, while using the shared vocabulary and metaphor, differed considerably from these earlier ideas:

In contrast to Spencer, Darwin thought evolution was more than the realisation of an archetype. By rooting the process of evolution in organic variations, he suggested that the notion of an 'ideal' or 'type' of a species was, in any case, nonsense. The 'unfit', in the sense of the variation from a supposed archetype of a species, might very well become the successful progenitor of a new one. A species could only be measured against its ability to propagate its kind, not against any idealised version of its essence and character.

Galton understood that the Darwinian 'Struggle for Existence' meant the poorer classes, 'classes of a coarser organisation' as he called them, would be favoured for their higher fertility rates, and his position was antithetical to Darwin's idea of 'the fittest'. Darwin, on the other hand, while appreciating the role of intellect in human evolution, pointed out as erroneous the belief that

there is some innate tendency towards continued development in mind and body. But development of all kinds depends on many concurrent favourable circumstances. Natural selection acts only in a tentative manner. Individuals and races may have acquired certain indisputable advantages and yet have perished from failing in other characters.

However, Galton and his colleague, Karl Pearson, remained committed to the Pre-Darwinian ideas of ‘Struggle for Existence’ improving the ‘type’ of the race. In many ways, Galton and the Eugenic movement belonged in the tradition of ‘cerebral physiology’ and phrenology of Gall, which ‘attempted to link moral and social behaviour with certain physical or physiological features of man’. Galton’s work was also deeply influenced by Adolphe Quetelet’s work, with the notions of observability and potentiality, and by the latter’s use of Statistical Techniques. Galton and Pearson refined these techniques and applied them in measuring ‘intellect’ - Galton focusing on the eminent people and Pearson studying the average individual - and pioneered ‘the multivariate experiment’ which studied many mental factors at once.  

Jones (1980) maintains that "Darwinism served in [Galton's] work only to 'modernise' what even in the nineteenth century was regarded by many as an archaic pseudo-science of mind." Despite Darwin's acute realisation that Galton's concept of the 'fittest' was an argument against his own idea of 'natural selection' - in fact, Galton was arguing for the suspension of natural selection within human societies - his own arguments in the Descent of Man in favour of a differentiation between animals and human beings in terms of intellect, designed to appease the many implacable enemies of 'natural selection' on the ground of human dignity, created an apparent ground on which Galtonian arguments could be launched. The deep philosophical difference over the non-realisation of a 'species type' through evolution became a moot point in public discussion when compared with the issue of the primacy of Man owing to 'his intellect'. Galton's work, therefore, remained within the tradition of Darwinian science, forever entwining the Eugenics movement with the name and prestige of Darwin, and influencing the later developments in the quest for intelligence both with its statistical techniques and with its assumptions about who the 'fittest' might be.

Intelligence and Ordering of Society 

The standard tool for measuring intellect - Intelligence, as it would now be called - was pioneered by a Frenchman, Alfred Binet, Director of the Psychology laboratory at the Sorbonne. Binet's first attempts at measuring intelligence were along the lines of the Physical Anthropology school, following Paul Broca's methods of measuring skull sizes and correlating them with intellectual capacity. However, Binet's initial studies produced only small differences, and he became aware of his own suggestibility - his measurements of skull size shrank when he knew in advance that the subject was considered less 'intelligent' - while doing these experiments. In 1904, however, when Binet was commissioned by the Minister of Public Education to identify the children who needed special education in schools, he spurned craniometry and instead devised a set of tests designed to measure the 'mental age' of children. Binet's tests were a series of defined tasks of progressive difficulty, and each level was associated with a mental age. Children were assigned a mental age based on the highest level of tasks they could perform, and by subtracting the mental age from the chronological age, Binet developed his famous 'Scale' (the children with the largest gap between mental and chronological age needed the most support). After Binet's death, the German psychologist W Stern modified the technique - dividing the mental age by the chronological age rather than subtracting the former from the latter (an important difference, as now the children with the lowest mental age had the lowest score, rather than the highest, as in Binet's Scale) - and this new score was called the Intelligence Quotient, or IQ.
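To make the arithmetic concrete, here is a minimal worked comparison of the two scores for a ten-year-old performing at the level of a typical eight-year-old; the conventional scaling of Stern's quotient by 100, which produces the familiar IQ figure, is assumed here for recognisability rather than quoted from either man's original formulation:

\[ \text{Binet's Scale:} \quad 10 - 8 = 2 \text{ years behind} \]

\[ \text{Stern's quotient:} \quad \frac{\text{mental age}}{\text{chronological age}} \times 100 = \frac{8}{10} \times 100 = 80 \]

The inversion mentioned above is visible here: on Binet's Scale the most delayed children receive the largest numbers, while on Stern's quotient they receive the smallest.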


Binet was all too aware of the limitations of the tests he devised, and insisted on three principles regarding the use of his tests, as Gould (1981) summarises:

  1. The scores are a practical device; they do not buttress any theory of intellect. They do not define anything innate or permanent. We may not designate what they measure as “intelligence” or any other reified entity.
  2. The scale is a rough, empirical guide for identifying mildly retarded and learning-disabled children who need special help. It is not a device for ranking normal children.
  3. Whatever the cause of difficulty in children identified for help, emphasis shall be placed upon improvement through special training. Low scores shall not be used to mark children as innately incapable.

These principles were put at risk immediately after Binet's death, as in the naming of the Intelligence Quotient, which inverted the Scale, and they were completely lost on H H Goddard, who introduced Binet's tests to America and 'reified' their scores as innate intelligence. Goddard, a former school-teacher who was by then a devoted Mendelian, was studying 'feeble-mindedness' through a series of field studies in collaboration with Elizabeth Kite. In 1912, Goddard wrote a book - The Kallikak Family: A Study in the Heredity of Feeble-mindedness - to set forth his appeal to the public about improving the 'racial type' through identification of the feeble-minded and discouragement of their propagation. In this book, which would become one of the most popular tracts of American Eugenics, Goddard brought together a host of ideas: "Binet's Measurements; Mendel's Laws; Galton's calls for an experiment with natural controls; and Kite's reports from the field." Goddard popularised Binet's work in America, translating his works and advocating its general use for Eugenic purposes. Ignoring Binet's warnings, though, Goddard used the scores as a measure of innate intelligence, and developed a 'unilinear scale of intelligence', classifying everybody, but with the specific purpose of recognising, limiting, segregating and curtailing the breeding of the 'feeble-minded'. This was also to be used for Goddard's idea of 'democracy', which meant
that the people rule by selecting the wisest, most intelligent and most human to tell them what to do to be happy. Thus democracy is a method for a truly benevolent aristocracy.

Goddard’s enduring legacy in the popular culture is a word that he invented to describe the ‘feeble-minded’, Moron, but by 1928, Goddard’s ideas had changed, and he, more in line with Binet, believed that feeble-mindedness is not incurable and that the feeble-minded did not need to be segregated in institutions.

While Goddard introduced Binet's tests to America, Lewis Terman, a Professor at Stanford, was their main populariser. Terman extended Binet's tests to include 'superior adults' and created the new Stanford-Binet tests in 1916. Through testing and elimination, Terman created a standardised system in which an average child would be expected to score 100 (at which level the mental age equals the chronological age), with a standard deviation of 15. This test has been the standard for IQ testing ever since, with other test providers benchmarking their tests against the Stanford-Binet without questioning its assumptions about how intelligence was defined and measured. The tests were extended to everyone, and they became hugely consequential in people's lives, not just in terms of school choice or employment, but also in literal life-and-death matters: In some states, people with an IQ lower than 70 were exempted from capital punishment. Terman wanted to use these tests to bring "tens of thousands of these high grade defectives under the surveillance and protection of society. This will ultimately result in curtailing the reproduction of feeble-mindedness and in the elimination of an enormous amount of crime, pauperism and industrial inefficiency." Terman promoted these tests as National Intelligence Tests and argued for universal testing, which, he claimed, would eliminate vice and crime and save the United States $500 million a year. Terman also studied the Geniuses of the Past, not unlike Galton and Pearson, and created a eugenic vision of technocracy which was not socially mobile, but rather defined by existing class and race prejudices, which Terman accepted as a given - a result of innate intelligence rather than something that could be upset by testing.
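For readers who want to see what the benchmarks of 100 and 15 mean in practice, the later 'deviation IQ' convention, which preserved Terman's mean and standard deviation when test revisions moved away from the simple mental-age ratio, can be written as a standard statistical normalisation (this formula is a later convention, not a quotation from Terman):

\[ \text{IQ} = 100 + 15 \times \frac{x - \mu}{\sigma} \]

where x is the raw test score and \( \mu \) and \( \sigma \) are the mean and standard deviation of raw scores for the test-taker's age group; a raw score one standard deviation above the age-group mean is thus reported as 115, and one standard deviation below as 85.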

Walter Lippmann, in a prescient critique of Terman’s endeavours, wrote:

The danger of the intelligence tests is that in a wholesale system of education, the less sophisticated or the more prejudiced will stop when they have classified and forget that their duty is to educate. They will grade the retarded child instead of fighting the causes of his backwardness. For the whole drift of the propaganda based on intelligence testing is to treat people with low intelligence quotient as congenitally and hopelessly inferior.

Lippmann’s fears have been realised, as is seen in The Atlantic article quoted above. However, Terman’s idea of universal testing was only partially realised through the endeavours of Robert M Yerkes, who convinced the United States Army to use IQ Tests for all its new recruits during the First World War, testing over 1.75 million people and classifying them, according to their IQ, to frontline or officer roles. The Army IQ Tests, despite being flawed institution because of its questionable methods and doubtful outcome (an average mental age of 13, for example) , were politically significant and was used to argue for restricted immigration for certain types of people (Eastern and Southern Europeans). However, the vastness of the Army IQ Tests produced enough data for its leading practitioners to reconcile their theories with empirical evidence, and despite several attempts, some sincere and others ingenious, many of them came to reverse their positions on the views they had earlier defended. One great example of this was C C Brigham, an Assistant Professor of Psychology at Princeton and one of Yerkes’ key associates in Army IQ Tests. Brigham wrote a book, A Study of American Intelligence, attempting to use racial arguments to justify the results of Army IQ Tests, only to recant it later and argue

Most psychologists working in the test field have been guilty of a naming fallacy which easily enables them to slide mysteriously from the score in the test to the hypothetical faculty suggested by the name given to the test. Thus, they speak of sensory discrimination, perception, memory, intelligence, and the like while the reference is to a certain objective test situation.

At the finest hour of IQ testing, its leading practitioners discovered an uncomfortable truth: Instead of being the objective reality that the practitioners believed existed, 'Intelligence' is whatever the intelligence tests measure, as E G Boring, one of Yerkes' key assistants, famously stated.

‘A Natural Aristocracy Among Men’ 

When James Bryant Conant, President of Harvard, wanted to reform Harvard in his first academic year of 1933-34, he set out to create a new kind of scholarship. Until this time, Harvard was a bastion of rich young students from private schools in New England, who lived in private apartments, often with a full retinue of servants and other attendants. The scholarships Harvard offered did not include accommodation, and were based on financial as well as academic criteria: This meant most scholarship students were day scholars from Boston, who lived with their parents, and often had to leave the college if their academic performance did not meet the criteria. Conant wished to change this, and to attract the best students from all over the country rather than just New England. To achieve this, he wanted to design a new, full four-year scholarship that included room and board, with minimal conditions and no work requirements. Conant wanted to change the idea of the scholarship from a 'badge of poverty' to a 'badge of honour': A rich student, if he won the scholarship, would be declared the winner but would be given no money.


The problem with Conant’s expansive vision was in the way Harvard selected its students then: Through a set of tests set by College Entrance Examination Board, which focused exclusively on the mastery of the New England Boarding School curriculum. They were unusable for selecting public school students from Midwest, who Conant wanted to bring to Harvard. He, therefore, had to set the task of finding an appropriate test for two of Harvard’s Assistant Deans, Henry Chauncey, the future President of Educational Testing Service and the face of SAT in America, and Wilbur J Bender. Chauncey was already a convert to the idea of testing, particularly after attending a lecture at Harvard by William Learned, who was conducting an Eight Year study on behalf of Carnegie Foundation for Advancement of Teaching in the Pennsylvania School System. Learned was not an IQ tester, but rather a believer of standardised achievement testing, which was effective within a particular schooling district, or even perhaps a state schooling system. Chauncey, however, wanted to create a Census of Abilities - a test for aptitude rather than achievement - and this led Chauncey and Bender to Carl Brigham, the associate of Yerkes in Army IQ Tests.

After the war, when Army IQ testing was over, the new market for IQ testing was schools and other educational institutions. Brigham and his colleagues metamorphosed the Army IQ Test for this new market, re-labelling it the Scholastic Aptitude Test (SAT) and basing it on the claim that these intelligence tests had a higher level of validity - the ability to predict an outcome, in this case first-year academic performance - than the College Board tests. The IQ testers claimed that intelligence testing would have a validity of .60, an ability to predict performance with 60% accuracy, whereas the College Board tests had a validity of only .20. However, when these claims failed to materialise, Brigham pointed to the 'social distractions' of high-living Princeton and Yale students to explain away the difference. The College Board started administering the SAT in 1926, and the armed services soon used it for admissions to West Point and the other service academies.
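To make the notion of a validity coefficient concrete, the short sketch below shows how such a figure is computed: it is simply the correlation between the admission-test score and the outcome the test claims to predict, here first-year grades. The scores and grades are invented purely for illustration; this is not ETS' or Brigham's actual data or procedure.

```python
# A minimal sketch: a predictive-validity coefficient is the Pearson correlation
# between an admission-test score and the outcome it claims to predict (here,
# first-year grades). All numbers below are invented for illustration only.
from statistics import mean, stdev

test_scores = [480, 520, 560, 600, 640, 680, 720]     # hypothetical SAT-style scores
first_year_gpa = [2.4, 2.9, 2.6, 3.1, 3.0, 3.5, 3.4]  # hypothetical first-year grades

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

validity = pearson_r(test_scores, first_year_gpa)
print(f"validity coefficient: {validity:.2f}")  # roughly 0.9 for this made-up data
```

A coefficient of .60, as claimed for the early intelligence tests, would indicate a much weaker relationship than this made-up example, and a coefficient of .20, as conceded for the College Board tests, barely any relationship at all.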

By the time Harvard started speaking to Brigham about using the SAT, Brigham had had a change of heart about IQ testing, and had formally retracted the claims he made in his popular tract, A Study of American Intelligence. Furthermore, Brigham published a second book in 1932, named A Study in Error, and was privately writing:

The test scores very definitely are a composite including schooling, family background, familiarity with English and everything else. The “native intelligence” hypothesis is dead

Brigham wanted to use the SAT as a ready method of interviewing, rather than as a test of native intelligence. But Conant, who was not a Eugenicist, believed in native intelligence nonetheless, and Harvard adopted the SAT for its new scholarship examination in 1934, gradually expanding the reach of the 'Conant Prize'.
 
Conant’s vision, however, was far more expansive than the new Scholarship programme at Harvard, and he wanted to, as he pointed out in his essay in Harper’s Magazine in 1938, “The Future of Our Higher Education”. Conant drew his inspiration from Thomas Jefferson, more specifically a letter Jefferson wrote to John Adams in 1813:

For I agree with you that there is a natural aristocracy among men...There is also an artificial aristocracy founded on wealth and birth, without either virtue or talents;.... The natural aristocracy I consider as the most precious gift of nature for the instruction, the trusts, and government of society. And indeed it would have been inconsistent in creation to have formed man for the social state, and not to have provided virtue and wisdom enough to manage the concerns of the society.

Conant disregarded Adams’ alarmed response - “Your distinction between natural and artificial Aristocracy does not appear to me well founded...I only say that Mankind have not yet discovered any remedy against irresistible Corruption in Elections to Offices of great Power and Profit, but making them hereditary” - as his ideas were also deeply influenced by Frederick Jackson Turner, Historian of the Frontier, whose central idea, that once the open frontiers of the American West was settled into and expansion of opportunity had disappeared, the American society would atrophy into European style class society without social mobility. In Conant’s view, public education is the way to maintain the vitality of the American society. Conant wanted - as he wrote in a later, one of his more radical essays - government to confiscate all property from time to time, and unseat the traditional elite through a new elite chosen democratically through testing and education.

The cornerstone of Conant’s idea to bring about the ‘Natural Aristocracy’ was a merger of all Test agencies and creation of a single, national, test provider administering aptitude tests for college admissions. Ironically, it was Carl Brigham who was standing on the way, who, by then, had become opposed to testing as a sorting device. On January 3, 1938, Brigham wrote to Conant a remarkable letter, calling the Army IQ Tests ‘atrocious’ and painting a dystopian picture of the day when Intelligence Testing would be ubiquitous:

If the unhappy day ever comes when the teachers point their students towards these newer examinations, and the present weak and restricted procedures get a grip on education, then we may look for the inevitable distortion of education in terms of tests.

Brigham would, however, pass away in 1943, at the age of fifty-two, and the unified testing agency he so opposed came into being in the form of the Educational Testing Service (ETS), with Henry Chauncey as its President and James Bryant Conant as the Chair of its Board of Trustees, on 1st January 1948. ETS took over all the College Board examinations, and the ACE Psychological Examination, which was the SAT's main challenger, was discontinued by the American Council on Education soon thereafter. Though other local and for-profit test providers would continue to operate and compete with the SAT in different regional markets, the SAT's pole position was guaranteed by its Ivy League credentials and government endorsements.

Despite a shaky start, ETS' financial position would also be guaranteed by another nationwide testing operation for the United States military - the Selective Service System - which contracted ETS to administer a nationwide test for students in college. The idea was to leave students above a certain cut-off score in college and allow them to defer the draft, but to send the others out for military training. This was a test of enormous consequence, with great public opposition, including from none other than Conant, who believed in universal military training, without exceptions. ETS adopted clever public relations techniques, insisting that the test was not an 'intelligence test' but rather one of 'Scholastic Aptitude', and presenting the cut-off score as 50 to remind test-takers of a school grade rather than an IQ. These tests put ETS and the SAT firmly into the public imagination; more importantly, they were enormously profitable for ETS and secured it financially.

However, the final ‘victory’ of SAT had to wait till 1958, when Clark Kerr, already a member of the Board of Trustees of ETS, became the President of the University of California, the largest State University System in the United States. From 1958, ETS started offering SAT at no cost to University of California applicants. In 1959, UC system required all out of state applicants to take the test. But in 1962, the University dropped SAT altogether, only to embrace it back again when, in the wake of the California Higher Education Master Plan, which restricted university education to the top eighth of High School graduates, grade inflation took hold in High Schools. The University stopped accrediting High Schools from 1963, and by 1968-9, SAT was required for all applicants to the university.

Conclusion: 'An Oligarchy of Brains'

In 2015, more than 1.7 million students took the SAT examination, with another 3.8 million taking the PSAT, its practice version. At the same time, the SAT has acquired such an aura that companies ask for SAT scores even from senior employees, sometimes in their 40s and 50s, who would have taken the SAT years ago. Accepting the SAT as an admission criterion is no longer a choice universities and colleges can make on their own; the US News & World Report Annual College Rankings, the guide middle-class parents in America depend on for school choice, automatically downgrade an institution on student selectivity, an important criterion, if the institution does not ask for SAT scores.


This acceptance does not mean that the SAT has proved itself to be accurate in predicting test-takers' 'scholastic' performance. Quite the contrary: its validity, the ability to predict academic performance, has dwindled; one study has put the SAT's ability to predict grades at about 15%. Its name changes - from Scholastic Aptitude Test to Scholastic Assessment Test to the current SAT Reasoning Test - were primarily to reflect the modesty of its claims, though the popular acronym, SAT, was always maintained as the shorthand for 'merit' in public perception. Lani Guinier (2015) called the SAT "the Wealth Test", and this is well reflected in the table below:
Gross Annual Family Income      Average SAT Score (out of 2400) for 2013 College-Bound Seniors
$0 - $20,000                    1326
$20,000 - $40,000               1402
$40,000 - $60,000               1461
$60,000 - $80,000               1497
$80,000 - $100,000              1535
$100,000 - $120,000             1569
$120,000 - $140,000             1581
$140,000 - $160,000             1604
$160,000 - $200,000             1625
More than $200,000              1714

This may be so not just because of the inherent bias in the tests, of which Brigham was so acutely aware, but also because of the enormous test preparation industry that has grown around the SAT. The pioneer in this was Stanley Kaplan, the Jewish entrepreneur who debunked ETS' claim that the SAT was not coachable by building a billion-dollar test-prep business around it. Wealthy parents today, considering the SAT to be a fail-safe ticket to education and career, spend $20,000 to $30,000 a year on SAT preparation for their children. The SAT has also become a racial sorting mechanism, with African-Americans averaging a score of 1278 against White students' 1576 (out of 2400, 2013 data).

In conclusion, the SAT appears to have become the Trojan horse of Eugenics in society, embedded within Jefferson's, and Conant's, lofty dream of a 'Natural Aristocracy', somewhat reaffirming Adams' weary scepticism about the human tendency to make advantages hereditary. The economist Gregory Clark, in an inversion of Galton's method, showed how wealth has remained largely hereditary over the last eight centuries, undermining the moral claim of 'meritocracy'. However, regardless of the evidence, the emergence of a 'Cognitive Elite' is now celebrated, and arguments about 'love and marriage by IQ' are respectable again.

Yet technological change, and the consequent disruption of middle-class life, opens up new questions about human abilities: Research by the psychologist Carol Dweck, for example, shows that children growing up assuming that 'intelligence' is a fixed and biological attribute (a 'Fixed Mindset', Dr Dweck calls it) are less able to cope with change and uncertainty than children growing up believing in zeal and perseverance (a 'Growth Mindset'). Lani Guinier argues that 'testocratic merit' has undermined the 'democratic merit' urgently needed for collaboration and conversation at a time when the quest for technological advancement may become self-defeating. In the end, Darwin may have had his point: In the quest to become good at one thing, we may have ignored other things crucial for our advancement, or even survival.

(5425 Words)


Bibliography 

Primary Sources 

Darwin Correspondence Project, “Letter no. 7032,” http://www.darwinproject.ac.uk/DCP-LETT-7032 Accessed on 30th April 2017


Thomas Jefferson to John Adams, Letter, 28th October 1813, The Founder’s Constitution, Chapter 15, Document 61, from http://press-pubs.uchicago.edu/founders/documents/v1ch15s61.html

John Adams to Thomas Jefferson, Letter, 15th November 1813, The Founder’s Constitution, Chapter 15, Document 62, from http://press-pubs.uchicago.edu/founders/documents/v1ch15s62.html

Secondary Sources 
Books 
Cattell, Raymond B, The Scientific Analysis of Personality, Penguin Books, London, 1965
Clark, G, The Son Also Rises, Princeton University Press, Princeton, NJ, 2015
Dweck, Carol, Mindset: How You Can Fulfil Your Potential, Random House, New York, 2006
Gould, SJ, The Mismeasure of Man, W W Norton, New York, 1981
Gruber, H, Darwin On Man, E P Dutton and Co. New York, 1974
Herrnstein, R J and Murray, C, The Bell Curve: Intelligence and Class Structure in American Life, Free Press, New York, 1994
Jones, G, Social Darwinism and English Thought, The Harvester Press, Sussex, 1980
Lemann, Nicholas, The Big Test, Farrar, Straus And Giroux, New York, 1999
Shurkin, J N, Terman’s Kids, Little Brown and Company, Boston, 1992
Young, M, The Rise of the Meritocracy, 1870-2033: An Essay on Education and Equality, Thames and Hudson, London, 1958
Zenderland, L, Measuring Minds: Henry Herbert Goddard and the Origins of American Intelligence Testing, Cambridge University Press, Cambridge, 1998



Websites and Periodicals 
Freedman, D, The War on Stupid People, The Atlantic, July/August 2016. Accessed from https://www.theatlantic.com/magazine/archive/2016/07/the-war-on-stupid-people/485618/ on 30th April 2017

Klein, R, More Students Are Taking SAT, Even As The Scores Fail to Improve, Huffpost Politics, September 4, 2015. Accessed from http://www.huffingtonpost.com/entry/2015-sat-results_us_55e751c6e4b0c818f61a56ce on 30th April 2017

Korn, M, Job Hunting: Dig Up The Old SAT Scores?, Wall Street Journal, March 25, 2014. Accessed from https://www.wsj.com/articles/job-hunting-dig-up-those-old-sat-scores-1393374186 on 30th April 2017
