Category Archives: education

Mathematics: essential learning?

Are there things everyone should be required to learn? If so, what are they?

A page of logarithms from the Handbook of Chemistry and Physics, 44th edition, 1962-1963

There are lots of things that are useful to know or be able to do. Reading and writing are fundamental. So is knowing how to count, add and subtract. Grammar and spelling can be useful too, as can recognising street signs. The list could go on.

These are things that are useful to know, but they are not identical to things students have to study. In high school in the US, I had to take two years of a foreign language in order to get into a good university. French was my worst subject. Then, at Rice University, I had to take two years of a language to graduate, even though my major was physics. I chose German this time around, and despite studying hard, was lucky to pass. For me, studying foreign languages was challenging, and I retained little of what I learned.

I vaguely remember some of the things I learned in school mathematics classes, like interpolating in a table of logarithms. To multiply or divide numbers, we would look up the logarithm of each number, add or subtract the logarithms and then find the number corresponding to the result. For greater accuracy, we would interpolate in the tables, that is, estimate values lying between two adjacent entries.
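For readers curious about the mechanics, here is a rough sketch in Python of how table-based multiplication worked. The tiny four-figure table below is hypothetical (real tables ran to thousands of entries), and the code is an illustration of the method, not a historical artefact:

```python
# A tiny hypothetical four-figure log table: log10(x) for a few values of x
table = {2.0: 0.3010, 2.1: 0.3222, 2.2: 0.3424,
         2.3: 0.3617, 2.4: 0.3802, 2.5: 0.3979}

def log_lookup(x):
    """Linearly interpolate log10(x) between the two nearest table entries."""
    keys = sorted(table)
    lo = max(k for k in keys if k <= x)
    hi = min(k for k in keys if k >= x)
    if lo == hi:
        return table[lo]
    frac = (x - lo) / (hi - lo)
    return table[lo] + frac * (table[hi] - table[lo])

# Multiply 2.13 x 2.47 by adding logarithms, then converting back
log_product = log_lookup(2.13) + log_lookup(2.47)
product = 10 ** log_product
print(product)  # close to the true product, 2.13 * 2.47 = 5.2611
```

The interpolation step is exactly the estimating-between-entries exercise described above; the small error in the result reflects the limited precision of a four-figure table.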

I learned how to use a slide rule, which is basically two rulers with logarithmic scales that can be used to multiply and divide. I remember in year 8 daring to use my slide rule in an exam, and then checking the result by calculating it longhand.

These skills became outdated decades ago, after the introduction of pocket calculators. No one says today that anyone should have to learn how to interpolate in tables of logarithms or to use a slide rule. Most young people have never heard of a slide rule.

Some knowledge becomes obsolete and other knowledge is never used. So is there anything that everyone must study and learn?

The math myth

These reflections are stimulated by Andrew Hacker’s new book The Math Myth. He is greatly disturbed by the requirement that all US students must study math (or maths as we say in Australia) to a level far beyond what is required in most people’s lives and jobs.

Hacker, a political scientist at Queens College in New York City, actually loves maths, and shows his knowledge of the field by dropping references to polynomials and Kolmogorov equations. He is ardent in his support of learning maths, primarily arithmetic (requiring addition, subtraction, multiplication and division) and practical understanding of real world problems. His target for criticism is requirements for learning algebra, trigonometry and calculus that damage the morale and careers of many otherwise capable students.

Andrew Hacker

In the US, according to Hacker, the most common reason students fail to complete high school or university is a maths requirement. Everyone has to pass maths courses, and learn how to solve quadratic equations, whether they are going to become a hairdresser, truck driver or ballet dancer. His argument is that many people have talents they are prevented from fully developing because of an absurd requirement to pass courses in mathematics. Even when students pass, many of them quickly forget what they learned because they never use it.

Hacker makes a bolder claim. He says that in many professions in which maths might seem essential, actually most practitioners use only arithmetic. This includes engineering. Hacker interviewed many engineers who told him that they never needed to solve algebraic equations or use trigonometric functions.

On the flip side, Hacker cites studies of some occupations, like carpet laying, in which workers in essence solve difficult equations, but they do it in a way passed down from experienced workers. The irony is that many of these workers never passed the maths classes mandated for finishing high school.

The resulting picture is damning. Millions of students struggle through maths classes, some of them falling by the wayside, others developing maths anxiety, yet few of them ever use the knowledge presented in these classes.

Why maths requirements?

How has this situation arisen? Hacker puts the blame on leaders of the mathematics profession, mostly elite pure mathematicians, who sit on panels that advise on high school and university syllabuses. Few of these research stars have any expertise in teaching, and indeed few of them spend much time with beginning students. Not only do they seldom visit a high school classroom, but most avoid teaching large first-year university maths classes. Educational administrators defer to these gurus rather than consulting with teachers who actually know what is happening with students.

It might be argued that being able to do well in maths is a good indicator of doing well in other subjects. Perhaps so, but this is not a good argument for imposing maths on all students. Research on expert performance shows that years of dedicated practice are required to become extremely good at just about any skill, including music, sports, chess and maths. The sort of practice required, called deliberate practice, involves focused attention on challenges at the limits of one’s ability. This sort of practice can compensate for and indeed supersede many shortcomings in so-called general intelligence. In other words, you don’t need to be good at maths to become highly talented in other fields.

Hacker argues that the test most commonly used for entry to US universities, the SAT, is unfairly biased towards maths, to the detriment of students with other capabilities. Not only do maths classes screen out many students with talents in other areas, but selection mechanisms for the most prestigious universities, whose degrees are tickets to lucrative careers, unfairly discriminate against those whose interests and aptitudes are in other areas.

Education as screening

Hacker’s analysis of maths is compatible with a wider critique of education as a screening mechanism. Randall Collins in his classic book The Credential Society argued that US higher education served more to justify social stratification than to stimulate learning. In other words, students go through the ritual of courses, and those with privileged backgrounds have the advantage in obtaining degrees that give them access to restricted professions.

In another classic critique, Samuel Bowles and Herbert Gintis in Schooling in Capitalist America argued that schooling reproduces the class structure. Their Marxist analysis gives the same general conclusion as Collins’ approach. Then there is The Diploma Disease by Ronald Dore, who described education systems worldwide, but especially in developing countries, as irrelevant in terms of producing skills that can be applied in jobs.

Schooling, up to the teenage years, remains one of the few compulsory activities in contemporary societies, along with taxation. (In some countries, military service, jury duty and voting are compulsory.) There is no doubt that education can be a liberating process in the right circumstances, but for many it is drudgery with little compensating benefit, aside from gaining a certificate needed to obtain a job, while what is learned has little practical relevance.

A different system would be to set up entry processes to occupations, ones closely related to actual skills used in practice. Exams and apprenticeships are examples. Attendance at schools and universities then would be optional, chosen for their value in learning. There is one big problem: attendance would plummet.

Some teachers set themselves the task of stimulating a love of learning. Rather than trying to convey particular facts and frameworks, they see that learning facts and frameworks is a way of learning how to learn. The ideal in this picture is lifelong learning.

The trouble with schooling systems is that they undermine a love of learning by imposing syllabi and assessments. Students, rather than studying a topic because they are fascinated by it, instead learn that studying is tedious and to be avoided, and only undertaken under the whip of assessment.

How many students do you know who keep studying after the final exam? On the other hand, people who are passionate about a topic will put in hours of concentrated effort day after day in a quest for improvement and in the engaged mental state called flow.

The paradox of educational systems is that they are designed to foster learning yet, by subjecting students to arbitrary requirements, can actually hinder learning and create feelings of inadequacy. The more that everyone is put through exactly the same hoops — the same learning tasks at the same time — the more acute the paradox.

A different sort of education

Taking this argument a step further leads to a double implication. Education should be designed around the needs of individual students, as attempted in free schools and in some forms of home schooling. The second implication is that work should be designed around the jointly articulated needs of workers and consumers. Rather than students having to compete for fixed job slots, instead work would be reorganised around the freely expressed needs and capacities of workers and local communities.

Whether this ideal could ever be reached is unknown, but it nonetheless provides a useful goal for restructuring education — including maths education. This brings us back to Hacker’s The Math Myth. There are two sides to his argument. The first, as I’ve described it, is that US maths requirements are damaging because few people ever need maths beyond arithmetic and the requirements screen talented people out of careers where they could make valuable contributions.

The second element in Hacker’s argument is that for the bulk of the population, there are useful things to learn about maths and that these can be made accessible using a practical problem-solving approach. To show what’s involved, Hacker describes a course he taught in which students tackled everyday challenges.

Hacker’s course shows his capacity for innovative thinking. The Math Myth is not an attack on mathematics. Quite the contrary. Hacker wants everyone to engage with maths by designing tasks that relate to their lives.

Whether Hacker’s powerful critique will lead to changes in US educational requirements remains to be seen. Although Hacker talks only about pointless maths requirements, his arguments challenge the usual basis for screening that helps maintain social inequality. If maths cannot be used to legitimise inequality in educational outcomes, what will be the substitute?

Whether you respond to maths with affection or anxiety, it’s worth reading The Math Myth and thinking about its implications.

Brian Martin

Daily data: be sceptical

Be careful about data you encounter every day, especially in the news.


If you watch the news, you are exposed to all sorts of numbers, intended to provide information. Some might be reliable, such as football scores, but with others it’s harder to know, for example the number of people killed in a bomb attack in Syria, the percentage of voters supporting a policy, the proportion of the federal budget spent on welfare, or the increase in the average global temperature.

Should you trust the figures or be sceptical? If you want to probe further, what should you ask?

To answer these questions, it’s useful to understand statistics. Taking a course or reading a textbook is one approach, but that will mainly give you the mathematical side. To develop a practical understanding, there are various articles and books aimed at the general reader. Demystifying Social Statistics gives a left-wing perspective, a tradition continued by the Radstats Group. Joel Best has written several books, for example Damned Lies and Statistics, providing valuable examinations of statistics about contested policy issues. The classic treatment is Darrell Huff’s 1954 book How to Lie with Statistics.

Most recently, I’ve read the newly published book Everydata by John H. Johnson and Mike Gluck. It’s engaging, informative and ideal for readers who want a practical understanding without encountering any formulas. It is filled with examples, mostly from the US.


You might have heard about US states being labelled red or blue. Red states are where people vote Republican and blue states are where people vote Democrat. Johnson and Gluck use this example to illustrate aggregated data and how it can be misleading. Just because Massachusetts is a blue state doesn’t mean no one there votes Republican. In fact, quite a lot of people in Massachusetts vote Republican, just not a majority. Johnson and Gluck show pictures of the US with the data broken down by county rather than by state, and a very different picture emerges.

Red, blue and in-between states

In Australia, aggregated data is commonly used in figures for economic growth. Typically, a figure is given for gross domestic product or GDP, which might have grown by 2 per cent in the past year. But this figure hides all sorts of variation. The economy in different states can grow at different rates, different industries grow at different rates, and indeed some industries contract. When the economy grows, this doesn’t mean everyone benefits. In recent decades, most of the increased income has gone to the wealthiest 1%, while many in the 99% are no better off, or have gone backwards.

The lesson here is that when you hear a figure, think about what it applies to and whether there is underlying variation.

In the Australian real estate market, figures are published for the median price of houses sold. The median is the middle figure. If three houses were sold in a suburb, for $400,000, $1 million and $10 million, the median is $1 million: one house sold for less and one for more. The average, calculated as the total of the sale prices divided by the number of sales, is far greater: ($0.4m + $1m + $10m) divided by 3, or $3.8 million.

The median price is a reasonable first stab at the cost of housing, but it can be misleading in several ways. What if most of those selling are the low-priced or the high-priced houses? If just three houses sold, how reliable is the median? If the second house sold for $2 million rather than $1 million, the median would become $2 million, quite a jump.
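The arithmetic in these examples is easy to check with Python’s standard library, using the same hypothetical sale prices as above:

```python
from statistics import mean, median

prices = [400_000, 1_000_000, 10_000_000]  # the three hypothetical sales
print(median(prices))  # the middle value: $1 million
print(mean(prices))    # $3.8 million, pulled far upward by the $10m sale

# With a sample this tiny, one changed sale shifts the median a long way
prices[1] = 2_000_000
print(median(prices))  # now $2 million
```

The gap between the two figures is a quick check for skewed data: when the mean sits far above the median, a few very expensive sales are doing most of the work.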

Is the average or median house price misleading?

In working on Everydata, Johnson and Gluck contacted many experts and have used quotes from them to good effect. For example, they quote Emily Oster, author of Expecting Better: Why the Conventional Pregnancy Wisdom is Wrong, saying “I think the biggest issue we all face is over-interpreting anecdotal evidence” and “It is difficult to force yourself to ignore these anecdotes – or, at a minimum, treat them as just one data point – and draw conclusions from data instead.” (p. 6)

Everydata addresses sampling, averages, correlations and much else, indeed too much to summarise here. If Johnson and Gluck have a central message, it is to be sceptical of data and, if necessary, investigate in more depth. This applies especially to data encountered in the mass media. For example, the authors comment, “We’ve seen many cases in which a finding is reported in the news as causation, even though the underlying study notes that it is only correlation.” (p. 46) Few readers ever check the original research papers to see whether the findings have been reported accurately. Johnson and Gluck note that data coming from scientific papers can also be dodgy, especially when vested interests are involved.

The value of a university education

For decades, I’ve read stories about the benefits of a university education. Of course there can be many sorts of benefits, for example acquiring knowledge and skills, but the stories often present a figure for increased earnings through a graduate’s lifetime.


This is an example of aggregated data. Not everyone benefits financially from having a degree. If you’re already retired, there’s no benefit.

There’s definitely a cost involved, both fees and income forgone: you could be out earning a salary instead. So for a degree to help financially, you forgo income while studying and hope to earn more afterwards.

The big problem with calculations about benefits is that they don’t compare like with like. They compare the lifetime earnings of those who obtained degrees with the lifetime earnings of those who didn’t, but these two groups are not random samples from the same population. Compared to those who don’t go to university, those who do are systematically different: they tend to come from well-off backgrounds, to have performed better in high school and to have a greater capacity for study and deferred gratification.

Where’s the study of groups with identical attributes, for example identical twins, comparing the options of careers in the same field with and without a degree? Then there’s another problem. For some occupations, it is difficult or impossible to enter or advance without a degree. How many doctors or engineers do you know without degrees? It’s hardly fair to calculate the economic benefits of university education when occupational barriers are present. A fair comparison would look only at occupations where degrees are not important for entry or advancement, and only performance counts.

A final example

For those who want to go straight to takeaway messages, Johnson and Gluck provide convenient summaries of key points at the end of each chapter. However, there is much to savour in the text, with many revealing examples helping to make the ideas come alive. The following is one of my favourites (footnotes omitted).


Americans are bad at math. Like, really bad. In one study, the U.S. ranked 21st out of 23 countries. Perhaps that explains why A&W Restaurants’ burger was a flop.

As reported in the New York Times Magazine, back in the early 1980s, the A&W restaurant chain wanted to compete with McDonald’s and its famous Quarter Pounder. So A&W decided to come out with the Third Pounder. Customers thought it tasted better, but it just wasn’t selling. Apparently people thought a quarter pound (1/4) was bigger than a third of a pound (1/3).

Why would they think 1/4 is bigger than 1/3? Because 4 is bigger than 3.

Yes, seriously.

People misinterpreted the size of a burger because they couldn’t understand fractions. (p. 101)

John H. Johnson

Mike Gluck

John H. Johnson and Mike Gluck, Everydata: The Misinformation Hidden in the Little Data You Consume Every Day (Brookline, MA: Bibliomotion, 2016)

Brian Martin

Learning from failure


Imagine you are a teacher and you decide to try an innovative teaching technique. However, it goes horribly wrong. The technique doesn’t work the way you expected, and numerous students complain to your supervisor. Luckily, your supervisor is sympathetic to your efforts and your job is secure.

What do you do next?

  1. Avoid innovative techniques: they’re too risky.
  2. Keep innovating, but be much more careful.
  3. Tell a few close colleagues so they can learn from your experience.
  4. Write an article for other teachers telling what went wrong, so they can learn from your experience.
  5. Invite some independent investigators to analyse what went wrong and to write a report for others to learn from.

The scenario of innovative teaching gone wrong has happened to me several times in my decades of teaching undergraduates. Each time, through no particular fault of my own, what I attempted ended up disastrously. It even happened one time when I designed a course that worked brilliantly one year but failed miserably the next.


So what did I do? Mainly options 2 and 3: I kept innovating, more carefully, and told a few colleagues. I never imagined writing about these teaching disasters, even using a pseudonym, much less inviting others to investigate and publish a report. It would be humiliating, might invite additional unwanted scrutiny, and might even make innovation more difficult in the future.

Aviation: a learning culture

These thoughts came to mind as a result of reading Matthew Syed’s new book Black Box Thinking. The title refers to the flight recorders in commercial aircraft, called black boxes, that record data about the flight, including conversations among the pilots. When there is a crash or a near miss, these boxes are vital for learning from the failure. Rather than automatically blaming the pilots, an independent team of experts investigates accidents and incidents and publishes its findings so the whole industry can learn from what happened.


Some of the greatest improvements in aircraft safety have resulted from studies of disasters. The improvement might be redesigning instruments so confusion is less likely or changing protocols for interactions between pilots. One important lesson from disasters is that the flight engineer and co-pilot need to be more assertive to prevent the pilot from losing perspective during tense situations. The investigations using black-box information occasionally end up blaming pilots, for example when they are drunk, but usually the cause of errors is not solely individual failure, but a combination of human, procedural and technical factors.

Cover-up cultures: medicine and criminal justice

Syed contrasts this learning culture in aviation with a culture of cover-up in medicine. There is a high rate of failure in hospitals, and indeed medical error is responsible for a huge number of injuries and deaths. But, as the saying goes, surgeons bury their mistakes. Errors are seldom treated as opportunities for learning. In a blame culture, everyone seeks to protect their jobs and reputations, so the same sorts of errors recur.

Syed tells about some hospitals in which efforts are made to change the culture so that errors are routinely reported, without blame attached. This can quickly lead to fixing sources of error, for example by differently labelling drugs or by using checklists. In these hospitals, reported error rates greatly increase because cover-up is reduced, while actual harm due to errors drops dramatically: fewer patients are harmed. Furthermore, costs due to patient legal actions also drop, saving money.


So why don’t more hospitals follow the same path? And why don’t more occupations follow the example of aviation? Syed addresses several factors: cultures of blame, excess power at the top of organisations, and belief systems resistant to testing.

In the criminal justice system, one of the most egregious errors is convicting an innocent person of a crime. Police and prosecutors sometimes decide that a particular suspect is the guilty party and ignore evidence to the contrary, or don’t bother to find any additional evidence. Miscarriages of justice are all too common, yet police, prosecutors and judges are reluctant to admit it.

In some cases, after a person has been convicted and spent years in jail, DNA evidence emerges showing the person’s innocence. Yet in quite a few cases, the police involved in the original investigation refuse to change their minds, going through incredible intellectual contortions to explain how the person they charged could actually be guilty. Syed comments, “DNA evidence is indeed strong, but not as strong as the desire to protect one’s self-esteem.” (p. 89)

Black boxes

When I heard about Black Box Thinking, I decided to buy it because I had read Matthew Syed’s previous book Bounce, about which I wrote a comment. Syed was the British table tennis champion for many years and became a media commentator. Bounce is a popularisation of work on expert performance, and is highly engaging. In Black Box Thinking, Syed has tackled a related and broader subject: how to achieve high performance in collective endeavours.

Matthew Syed

The title had me confused at first, because in other disciplines a black box refers to a system whose internal mechanisms are hidden: only inputs and outputs can be observed. In contrast, flight recorders in aircraft, which actually are coloured orange, not black, are sources of information.

Syed’s book might have been titled “Learning from failure,” because this is the theme throughout his book. He presents stories from medicine, aviation, business, criminal justice, sport and social policy, all to make the point that failures should be treated as opportunities for learning rather than assigning blame. Individuals can heed Syed’s important message, but bringing about change in systems is another matter.

Another theme in the book is the importance of seeking marginal gains, namely small improvements. Syed tells about Formula One racing in which tiny changes here and there led to superior performance. Another example is when the company Unilever was manufacturing soap powder – laundry detergent – and wanted to make the powder come out of the nozzle more consistently.

Unilever’s initial nozzle

Unilever hired a group of mathematicians, experts in fluid dynamics and high pressure systems, to come up with an answer, but they failed. Unilever then hired a group of biologists – yes, biologists – who used a process modelled on evolution. They tried a variety of designs and determined which one worked best. Then they took the best performing design and tested slight modifications of it. Applying this iterative process repeatedly led to a design that worked well but never could have been imagined in advance.
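The biologists’ strategy, vary, test, keep the best, repeat, is essentially what is now called an evolutionary or hill-climbing algorithm. Here is a minimal sketch on a toy problem; the one-dimensional “nozzle parameter” and its performance function are invented for illustration and have nothing to do with Unilever’s actual process:

```python
import random

def performance(x):
    """Toy stand-in for a nozzle test: peaks at x = 3.7, unknown to the searcher."""
    return -(x - 3.7) ** 2

random.seed(42)
best = 0.0  # arbitrary starting design
for generation in range(45):  # the process above ran for 45 iterations
    # Try several slight variations of the current best design
    candidates = [best + random.gauss(0, 0.5) for _ in range(10)]
    challenger = max(candidates, key=performance)
    if performance(challenger) > performance(best):
        best = challenger  # keep the improvement, discard the rest

print(best)  # converges close to the optimum at 3.7
```

The point of the example is the one Syed makes: the search never needs a theory of why a design works, only a way of testing variants and keeping the winners.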

Unilever’s final nozzle, after 45 trial-and-error iterations

Learning from mistakes in science

Syed presents science as a model for learning from error, seeing the experimental method as a great advance over adherence to dogma. Science certainly has led to revolutionary changes in human understanding and, in tandem with technology, to dramatic improvements in human welfare, as well as to unprecedented threats to human life (nuclear weapons and climate change). However, Syed notes that science students mainly study the latest ideas, spending little or no time examining “failed” theories such as aether or astrology: “By looking only at the theories that have survived, we don’t notice the failures that made them possible.” (p. 52)

Even so, overall Syed’s view of science is an idealistic image of how research is supposed to work by continually trying to falsify hypotheses. Historian-of-science Thomas Kuhn argued in The Structure of Scientific Revolutions that most research is problem-solving within a framework of unquestioned assumptions called a paradigm. Rather than trying to falsify fundamental assumptions, scientists treat them as dogma. Sociologist Robert Merton proposed that science is governed by a set of norms, one of which is “organised scepticism.” However, the relevance of these norms has been challenged. Ian Mitroff, based on his studies, proposed that science is equally well described by a corresponding set of counter-norms, one of which is “organised dogmatism.”


Although science is incredibly dynamic due to theoretical innovation and experimental testing, it is also resistant to change in some ways, and can be shaped by various interests, including corporate funding, government imperatives and the self-interest of elite scientists.

Therefore, while there is much to learn from the power of the scientific method, there is also quite a bit that scientists can learn from aviation and other fields that learn systematically from error. It would be possible to examine occasions when scientists were resistant to new ideas that were later accepted as correct, for example continental drift, mad cow disease or the cause of ulcers, and spell out the lessons for researchers. But it is hard to find any analyses of these apparent collective failures that are well known to scientists. Similarly, there are many cases in which dissident scientists have had great difficulty in challenging views backed by commercial interests, for example the scandals involving the pharmaceutical drugs thalidomide and Vioxx. There is much to learn from these failures, but again the lessons, whatever they may be, have not led to any systematic changes in the way science is carried out. If anything, the subordination of science to powerful groups with vested interests is increasing, so there is little incentive to institutionalise learning from disasters.


Failure: still a dirty word

Although Syed is enthusiastic about the prospects of learning from failure, he is well aware of the obstacles. While he lauds aviation for its safety culture, in one chapter he describes how the drive to attribute blame took over and a conscientious pilot was pilloried. Blaming seems to be the default mode in most walks of life. In politics, assigning blame has become an art form: opposition politicians and vulnerable groups are regularly blamed for society’s problems, and it is a brave politician indeed who would own up to mistakes as a tool for collective learning. In fact, political dynamics seem to operate with a different form of learning, namely how to be ever more effective in blaming others for problems.


I regularly hear from whistleblowers in all sorts of occupations: teachers, police, public servants, corporate employees and others. In nearly every case, there is something going wrong in a workplace, a failure if you want to call it that, and hence a potential opportunity to learn. However, organisational learning seems to be the least likely thing going on. Instead, many whistleblowers are subject to reprisals, sending a message to their co-workers that speaking out about problems is career suicide. Opportunities for learning are regularly squandered. Of course, I’m seeing a one-sided perspective: in workplaces where failure does not automatically lead to blame or cover-up, there is little need for whistleblowing. When those who speak out about problems are encouraged or even rewarded, no one is likely to contact me for advice. Even so, it would seem that such workplaces are the exception rather than the rule.

The more controversial the issue, the more difficult it can be to escape blaming as a mode of operation. On issues such as abortion, climate change, fluoridation and vaccination, partisans on either side of the debate are reluctant to admit any weakness in their views because opponents will seize on it as an avenue for attack. Each side becomes defensive, never admitting error while continually seeking to expose the other side’s shortcomings, including pathologies in reasoning and links to groups with vested interests. These sorts of confrontations seem designed to prevent learning from failure. Therefore it is predictable that such debates will continue largely unchanged.

Although the obstacles to learning from failures might seem insurmountable, there is hope. Black Box Thinking is a powerful antidote to complacency, showing what is possible and identifying the key obstacles to change. The book deserves to be read and its lessons taken to heart. A few courageous readers may decide to take a risk and attempt to resist the stampede to blame and instead foster a learning culture.


“The basic proposition of this book is that we have an allergic attitude to failure. We try to avoid it, cover it up and airbrush it from our lives. We have looked at cognitive dissonance, the careful use of euphemisms, anything to divorce us from the pain we feel when we are confronted with the realisation that we have underperformed.” (p. 196)

Brian Martin

A title for your article

The title of an article, book or thesis can make a big difference, so it’s worth spending time and effort to find a good one.


When someone reads your article, what’s the first thing they read? The title of course. In fact, it may be the only thing they read. If it’s boring or off topic, they may not bother looking further. If it sounds intriguing, they may proceed even if it’s not their main area of interest.

In 1973, E. F. Schumacher authored a book presenting ideas about economics, for example concerning production, land, resources, ownership and technology. It became well known in part due to its inspired title: Small is Beautiful.

Rachel Carson’s 1962 book Silent Spring was a best seller and helped launch the modern environmental movement. Carson was a skilled science writer, but even so the title of her book helped make it an icon. Imagine that it had been called instead Pesticides and Living Landscape, the title of a book by Robert L. Rudd that came out a couple of years later and covered much of the same ground. (The publication of Rudd’s book was delayed by opposition from pesticide supporters.)


I’m focusing here on non-fiction. Titles for novels, short stories, plays, poems and musical compositions are also important. However, titles alone aren’t enough: the content is crucial. Furthermore, good work can succeed despite an ordinary title. Some of Beethoven’s compositions have special titles, for example the Pastoral and the Choral symphonies. However, Symphony #5 is well known without having a descriptive word attached to it. Imagine, though, Agatha Christie’s murder mysteries being titled Detective Novel #1 through to Detective Novel #66.

Agatha Christie

My experience

Somewhere along the line, early in my writing career, I started paying attention to titles. In 1979, I wrote a booklet with the provisional title Activists and the Politics of Technology. Seeking something catchier, I asked some of my environmentalist friends and one said, “Ask David Allworth. He’s good at titles.” So I approached David and gave him some information about my booklet. Before long, he came up with a list of excellent possibilities, and one I loved: Changing the Cogs.

A year later, I had another booklet ready for publication. A descriptive title would have been “A critical analysis of the pro-nuclear views of Sir Ernest Titterton and Sir Philip Baxter.” I forget how, but the title became Nuclear Knights: Titterton and Baxter had been knighted. Alliteration is valuable in a title. The publisher was Rupert Public Interest Movement, at the time campaigning for freedom-of-information legislation. John Wood, a key figure in Rupert, drew a memorable cover graphic showing Baxter and Titterton as Don Quixote and Sancho Panza tilting at windmills. Covers can be as important as titles, but that is another topic.


A few years later, I wrote a book tentatively titled Grassroots Action for Peace. I racked my brain for something catchier and came up with Uprooting War. Again, my publisher provided an inspired graphic.


On another occasion, I made the mistake of making a title more academic. The title I chose was Scientific Knowledge in Controversy: The Social Dynamics of the Fluoridation Debate. It’s descriptive but not easy to remember. In retrospect, I should have stuck with my original idea, Fluoridation and Power.

For academic works, a common practice is to provide a short attractive main title and a more descriptive subtitle, but if the main title is too general, it can be misleading. For example, the title Power Politics could refer to lots of things, including electricity politics or any number of politicians or political events. You see the title and then discover the subtitle, such as Environmental Activism in South Los Angeles.

I wrote the title of this post as “A title for your article,” but so far I’ve written about books, not articles. Most people write far more articles than books. I could have more accurately titled the post “How to find a good title for your article, book or thesis.” There’s often a trade-off between brevity and descriptiveness: a short title is bound to leave something out. It’s useful to think of a title as a handle: something that makes the work easy to pick up and refer to. Though brevity is often better, lengthy titles can sometimes be effective. One of my favourites is Barrington Moore Jr’s book Reflections on the Causes of Human Misery and upon Certain Proposals to Eliminate Them.

I’ve written several articles about the debate over the origin of AIDS, looking at the treatment of the theory that the disease entered humans via contaminated polio vaccines used in Africa in the 1950s. One of my articles was about my own involvement in the debate as a social researcher, and I came up with the title “Sticking a needle into science: the case of polio vaccines and the origin of AIDS.” The main title, “Sticking a needle into science,” draws on the imagery of vaccination involving an injection using a needle. Actually, the polio vaccines in question were administered orally, with the vaccine squirted into recipients’ mouths.


Brainstorming titles

At a meeting some years ago with a group of my PhD students, we helped Patrick decide on a title for his thesis, which was nearly ready for submission. Patrick briefly explained that his thesis dealt with methods used by key groups in the debate over climate change. I knew more detail, of course, but most of the others didn’t. I asked everyone to write down at least ten possible titles for Patrick’s thesis as quickly as possible, saying there was a prize for the best title and another for the funniest.

Individual brainstorming can be more productive than the collective form. The point of quickly writing numerous possible titles is to move thinking from the logically oriented left hemisphere of the brain to the more creative right hemisphere. Most scholars need to loosen up to be more creative. Offering a prize for the funniest title helps.


After a while, I called a halt, and everyone gave their lists of titles to Patrick. Not everyone had produced ten possible titles, but some had produced more. Then Patrick read them all out loud, starting with #1 from each list, then #2 from each list, and so on. The best title was judged by Patrick and the funniest title was judged by the most laughter – and there was plenty. The prizes were trinkets or chocolates, more symbolic than substantial.

Patrick ended up titling his thesis using a combination of a couple of the suggestions. It was “Climate conflict: players and tactics in the greenhouse game.”

This technique of generating title ideas has worked well every time I’ve tried it with a group. Sometimes none of the suggested titles is ideal, but the process helps the author to think up something better. Those suggesting titles don’t need to be knowledgeable about the topic. In fact, it can be better if they know only a little, so they are less inhibited by expectations.

There are many considerations to take into account in deciding on a title, including key words for web searches, relevance to readers in the field and beyond, acceptability to editors, and conventions in the genre. The thing I’ve learned is that it’s worthwhile spending a fair bit of time and effort choosing a title, and also worthwhile enlisting others in the task. People read your titles more than anything else you write, so why not make them as good as you can?

Brian Martin

An orchestrated attack on a PhD thesis

Judy Wilyman, an outspoken critic of the Australian government’s vaccination policy, undertook a PhD at the University of Wollongong. She graduated in December 2015.

On 11 January, her PhD thesis was posted on the university’s digital repository, Research Online. On the same day, anticipating an attack on Judy and the thesis, I posted a document titled “Judy Wilyman, PhD: how to understand attacks on a research student”, which turned out to be remarkably accurate in characterising the attack that commenced within 24 hours.

The attack included a series of biased articles in The Australian by journalist Kylar Loussikian, numerous hostile blogs and tweets, a one-sided Wikipedia page, and a petition. Never before have I heard of such an outpouring of rage over the award of a PhD in Australia.


As a sociologist, I find this phenomenon fascinating in its assumptions and motivations. I am hardly a neutral observer: I was Judy’s principal supervisor at the University of Wollongong, and quite a bit of the outrage has been directed at me, my supervision and my research. On the other hand, I have considerable inside knowledge, enabling insight into the claims being made.

Given the volume of hostile commentary about Judy’s thesis, it is not possible for me to undertake a comprehensive analysis of it in a short time. Therefore my observations here are preliminary. Rather than try to provide detailed evidence to document my generalisations, I merely illustrate them with a few comments made by signers of the petition against the university and the PhD. Down the track, I hope to provide a more detailed response, including responses to those treatments that address matters of substance.

SAVN attacks

The outrage over Judy becoming Dr Wilyman can best be understood by studying the operations of the group now calling itself Stop the Australian (Anti)Vaccination Network or SAVN. Since 2009, SAVN has been attempting to censor and discredit any public criticism of vaccination, using misrepresentation, ridicule, complaints and harassment, as I have documented in a series of articles. SAVN’s agenda has been to cleanse public discourse of dissent about vaccination. Judy Wilyman has been one of SAVN’s many targets.


Judy had been under attack by SAVNers for several years. Therefore, I and others at the University of Wollongong correctly assumed there would be a hostile response to her graduation. Consider two hypotheses for how I and university officials would behave in this situation.

Hypothesis 1. We would push through a sub-standard thesis.

Hypothesis 2. We would take extra care to ensure that the thesis was of requisite quality and that all university processes were followed carefully. This would include sending the thesis to technical experts and choosing external examiners of high standing.

To me, it beggars belief that anyone would believe hypothesis 1, especially given that outsiders lack information about the operation of university processes. Yet in practice it seems that many outsiders, based on limited knowledge, assume that the thesis must be no good, my supervision was inadequate and the university was derelict.

The rush to condemn the thesis and the university can be understood this way: opponents assume it is impossible to undertake a scholarly critique of vaccination policy (or at least impossible for Judy to do so). Therefore, they condemn everyone involved in the process.

Furthermore, opponents do not acknowledge that scholars can differ in their evaluation of evidence and arguments. Instead, in various scientific controversies, including the vaccination debate, dissident experts are subject to attack.


Within media studies, there is a well-known and widely discussed view that the mass media do not tell people what to think, but are quite influential in determining what people think about. The articles by Kylar Loussikian in The Australian apparently were highly influential in getting a lot of readers to think about Judy Wilyman’s PhD. The readers’ agenda was set by the mass media yet, as agenda-setting research notes, few of them realised their focus of attention had been so influenced.


Associated with media agenda-setting is the significance of framing, which is about the perspective from which people see an issue. Loussikian’s articles framed the issue as about shortcomings of a PhD thesis and the credibility of the student, the supervisor, the examiners and the university. This frame was adopted by most (though far from all) commentators.

It is an interesting thought experiment to consider the likely response to a differently framed set of articles about the thesis, in which the central issue was an attack on academic freedom by SAVN over a number of years. However, The Australian was unlikely to adopt this frame. Indeed, a couple of years earlier, an Australian journalist had adopted SAVN’s agenda against Judy.

Assumptions about scholarship

Many of the attackers seem to have assumed that scholarship and criticism of vaccination are incompatible. How else could they justify condemning the university? An alternative view is to support current Australian government vaccination policy while accepting that it can be subject to a scholarly critique.


SAVNers for years have proclaimed that there is no debate about vaccination, by which they mean that there are no valid objections to the dominant view. To acknowledge that a scholarly critique is possible is to accept there is something to debate. Apparently this possibility is so threatening that it must be met by denigration and abuse.

Looking at the thesis

In “Judy Wilyman, PhD” I anticipated the sorts of attacks that would be made. This was not difficult: I simply listed the methods that had been used previously. Here’s what I wrote in a section titled “What to look for in criticism”:

When people criticise a research student’s work, it is worth checking for tell-tale signs indicating when these are not genuine concerns about quality and probity but instead part of a campaign to denigrate viewpoints they oppose.

  1. They attack the person, not just their work.
  2. They concentrate on alleged flaws in the work, focusing on small details and ignoring the central points.
  3. They make no comparisons with other students or theses or with standard practice, but rather make criticisms in isolation or according to their own assumed standards.
  4. They assume that findings contrary to what they believe is correct must be wrong or dangerous or both.

The attacks on Judy’s research exhibit every one of these signs. Her opponents attack her as a person, repeatedly express outrage over certain statements she has made while ignoring the central themes in her work, make no reference to academic freedom or standard practice in university procedures, and simply assume that she must be wrong.

My preliminary observation is that most of the hostile commentary about the thesis exhibits one or more of these signs.


There have been numerous derogatory comments made about Judy, me and the university, most without providing any evidence and many based on misrepresentations of the thesis. Proponents of evidence-based medicine might ponder whether it is legitimate to condemn a thesis without reading it, condemn a supervisor without knowing anything about what happened during the supervision process, and condemn a university without having any information about the operation of university procedures. (Tell-tale sign 1)

Some of the opponents of the thesis have referred to comments made by Judy in other contexts. Likewise, questions have been raised about some of my other research. This is the technique of attacking the person in order to discredit their work. (Tell-tale sign 1)

When raising concerns about a piece of research, the normal scholarly route is to send them to the author, inviting a reply, not to immediately publicise them via journalists. An alternative is to submit them to a scholarly journal for publication, in which case many editors would invite the author to reply.

Alleging there are errors in a piece of work does not on its own challenge the central arguments in the work. For this, addressing those arguments directly is necessary. Very few of the critics of Judy’s thesis have addressed any of its central themes. (Tell-tale sign 2)

The intensive scrutiny of Judy’s thesis on its own does not enable a judgement of its quality, because it is necessary to benchmark against other comparable theses. None of her critics has attempted a similarly intensive scrutiny of any other thesis, much less a set of theses large enough to enable a fair assessment of her work. Experienced examiners have assessed many theses, as supervisors and/or examiners, and are well placed to make the required judgements about quality. This is in stark contrast to outside critics, many of whom lack any experience of thesis supervision or examination. (Tell-tale sign 3)

Why is there such a hue and cry over Judy’s thesis? Many theses tackling controversial topics or taking non-standard positions are published every year. Many of the critics of the thesis apparently believe no thesis proposal critical of vaccination should be accepted at an Australian university, and that for such a thesis to be passed necessarily reflects adversely on the university. The thinking behind this seems to be based on the assumption that criticism of Australian government vaccination policy is dangerous and should be censored. (Tell-tale sign 4)

“I care. I believe in freedom of thought and speech, however this unscientific bullshit has to stop. It’s endangering lives.” — Kate Hillard, Broome, Australia

The net effect of these techniques is striking. A group of campaigners, with a well-established agenda of attacking critics of vaccination, sets out to discredit a thesis. Disdaining accepted scholarly means of critique, they feed material to a journalist. They take sentences from the thesis out of context and assert they are wrong, going public before offering the author an opportunity to reply. They ignore the central themes of the thesis. They show no awareness of scholarly expectations in the field, instead asserting the superiority of their own judgements over those of the examiners. Based on this charade of intellectual critique, they then condemn the thesis, the student, the supervisor and the university in an orchestrated campaign.

The role of expertise

SAVNers and quite a few other commentators state or assume that vaccination policy is a scientific issue, rather than one including a complex mixture of science, ethics and politics. These commentators then jump to the conclusion that only scientific experts are qualified to make judgements about vaccination policy. There is a contradiction in their discourse, though, because few of these commentators themselves have relevant scientific expertise, yet they feel entitled to make pronouncements in support of vaccination. So their assumption is that anyone, with relevant credentials or not, can legitimately support vaccination policy but no one without relevant scientific expertise is entitled to criticise it. They ignore the significance of policy expertise.


This is a familiar theme within scientific controversies: critics of the epistemologically dominant view are dismissed because they are not suitably qualified. There is another way to look at policy issues: all citizens should be able to have an input, especially those with a stake in the outcomes. This participatory view about science policy has been well articulated over several decades, but few of those commenting about Australian vaccination policy even seem to recognise it exists.

Many opponents of the thesis and critics of the university have declared this issue is not about academic freedom but about academic standards. This claim would be more convincing if these opponents had ever made scholarly contributions about academic freedom or if they were not making self-interested judgements about their own behaviour. Their actions show their agenda is suppression of dissent.

The SAVN message

What is the implication of SAVN’s campaign against Judy Wilyman? And why do SAVNers and others continue to attack the University of Wollongong despite lacking any concrete evidence of any shortcomings in the university’s processes? There is one underlying message and two audiences. The message is that no university should consider allowing a research student (or at least an outspoken research student) to undertake a study critical of vaccination.

The first audience is the University of Wollongong. The second audience is other universities, which are being warned off critical studies of vaccination, or indeed of any other medical orthodoxy, by the example being set by the attack on the University of Wollongong.

There is also another message, which is along the lines of “Don’t mess with SAVN. We will launch a barrage of abuse, ridicule and complaints, and use our connections with the media and the medical profession, to assail anyone who crosses us.”

The original reason I became involved in the Australian vaccination debate is that I saw SAVN’s agenda as dangerous to free speech. If adopted more widely, SAVN’s approach would stifle discussion on a range of issues.

I am therefore buoyed by the support I’ve received from my colleagues, including senior figures, at the University of Wollongong, who believe in the importance of open debate and of scholarship that challenges conventional wisdom.

It is apparent that academics and universities need to do more to explain what they do and to explain the meaning and significance of academic freedom.


See also my other writings about attacks on Judy and her thesis.

Think freakier

The authors of Freakonomics have now written Think like a Freak. Their stimulating perspective is an invitation to think in even more original ways.

Steven Levitt is an economist at the University of Chicago who became famous for his book Freakonomics, in which he applies logic and mathematics in original ways to both longstanding and novel problems and issues. The book’s co-author, Stephen Dubner, is a writer who can turn dry statistics into page-turning adventures.


One controversial topic covered in Freakonomics was the cause of the decline in the US crime rate in the 1990s. The authors presented the idea that the legalisation of abortion nationwide in the early 1970s led to a significant decline in the birth of children in disadvantaged circumstances, and as a result the crime rate went down 15 to 20 years later. They cite statistics and references to back up this hypothesis. Freakonomics looked also at why teachers cheat, the economics of drug dealing, and fashions in naming children, among other topics. Levitt and Dubner later extended their popular treatments of unorthodox perspectives in SuperFreakonomics. As well, the authors run a blog and a radio programme.

Steven Levitt

Because of the huge sales of Freakonomics, it is not surprising that Levitt’s research findings have come under considerable scrutiny, with some data and findings contested. As well, it is debatable whether the topics covered should be considered part of the economics discipline.

Most recently, Levitt and Dubner have written Think like a Freak, aiming to explain their approach by using engaging examples to motivate general comments. This book is my focus here. Learning to think in unorthodox ways can be worthwhile even if the results are sometimes questionable.


In Think like a Freak, the authors tell, for example, of Takeru Kobayashi (nicknamed Kobi), a slightly built Japanese man who became involved in competitions to eat as much as possible in a short time. After some initial successes, he entered the biggest event in the field, Nathan’s Famous Hot Dog Eating Contest in Coney Island, New York. The annual contest involved eating as many hotdogs as possible in 12 minutes. Other competitors followed the then conventional wisdom, which was to train by eating as much as possible. Kobi, though, decided to train in a different way: he experimented with different approaches, for example eating the sausage separately from the bun and soaking the bun in water so it could be swallowed more quickly. Going into the competition, Kobi astounded the field by winning and by smashing the record, eating nearly twice as many hotdogs as the previous highest number.

Takeru Kobayashi, 2006

From this example, Levitt and Dubner highlight a few key points. Kobi didn’t just accept the conventional approaches: he tried out new approaches, tested them and practised them. He also focused on how he ate, applying the methods he had developed as well as possible rather than comparing his performance to previous efforts by others. In this way he was not held back by the expectation that records can only be broken incrementally. Finally, Kobi developed a mental technique, including his focus on process, that allowed him to enjoy gorging himself, despite the pain and discomfort involved.

Levitt and Dubner pursue this path of presenting simple ideas that, when applied in unorthodox ways or to unexpected topics, lead to potential breakthroughs. One chapter is “Think like a child.” Of course they don’t mean always think like a child, but in some circumstances children can cut through conventional ways of seeing the world, conventional for adults that is. A magician friend of the authors told them he was hardly ever caught out by an adult, but quite a few children could see through his tricks, for a variety of reasons: they were less focused and hence harder to distract, they were more attentive to details adults wouldn’t notice, and they were shorter and could see things that adults couldn’t because the tricks were designed to be seen from above.



Levitt and Dubner describe meeting with David Cameron just before he became Britain’s Prime Minister. They pointed out some ways to make the National Health Service more efficient by introducing charges for service, a perspective that comes naturally to an economist. But Cameron switched off: the NHS was not to be tinkered with.

Levitt and Dubner here subscribe to conventional rationality of planning by elites, those who supposedly know best. But there is more to decision-making than rationality. Part of the picture is involving citizens in the decisions that affect them, thereby enabling far better uptake of policies. Cameron instinctively knew he could not implement major NHS reforms, even if he wanted to, without winning over the population. (Incidentally, the US fee-based health system is hardly a model of rationality.)

Levitt and Dubner advocate going to the roots of problems, not just treating symptoms. They tell the now-familiar story of how Barry Marshall and Robin Warren discovered that ulcers are caused not by stress and spicy foods but by a bacterium that can be eliminated by antibiotics. They had to fight the medical establishment for recognition. Marshall and Warren, now Nobel Prize winners, had addressed the cause of ulcers. So far, so good.

Then there is crime, a favourite topic for Levitt and Dubner. In reprising their studies of abortion and crime, they point out that some measures, such as more capital punishment and tighter gun laws, do not reduce the crime rate. They instead prefer to focus on something deeper, children’s upbringing.

There are other ways to look at crime not examined by Levitt and Dubner. One is to point out that nearly all crime appearing in US police statistics is by people at the bottom of the social pyramid. Those who are poor, with less education and few opportunities, are far more likely to commit the sorts of crime that result in arrests and imprisonment. However, available evidence suggests that the biggest criminals are at the top of the social hierarchy, including white-collar crimes by individuals and major crimes by corporations and governments. Pharmaceutical companies, for example, have been fined billions of dollars for crimes leading to the deaths of tens of thousands of people, but few executives are ever called to account. So crime statistics should be treated as an artefact of a class-based approach to criminality: most of the big boys (and girls) can cheat and steal with impunity, while those further down the hierarchy are subject to far greater scrutiny and punishment.

The sociologist Randall Collins wrote an insightful chapter presenting an unfamiliar perspective on crime. He argues that all societies need to define some activities as deviant, and those considered most deviant are criminalised. So crime rates reflect deeper processes of social stratification and exclusion. In this case, thinking like a freak may not get you as far as reading some sociology.

Randall Collins, Sociological Insight

Levitt and Dubner write about a study by Jörg Spenkuch of German Protestants and Catholics that found people living in Protestant areas earned a little more money on average than people living in Catholic areas, although their hourly wages were the same. One factor was that those in Protestant areas worked longer hours. Is the lesson from this, as suggested by Levitt and Dubner, that kids should be encouraged to be more hard-working like Protestants? An alternative lesson is that by working fewer hours, Catholics are increasing their well-being: it is well documented that higher incomes have a minimal impact on happiness compared to spending time with family and friends.

Persuading people

Levitt and Dubner include a useful chapter on how to persuade people who don’t want to be persuaded. They make some useful recommendations. One is to give credit to the other side’s strong points, because an opponent is unlikely to engage in debate with an obviously biased perspective. In studying numerous scientific controversies over the years, my observation is that it is rare for a partisan to give a fair summary of the opponent’s argument. In the Australian vaccination debate, each side presents its strong points and criticises the other side’s weak points. There’s very little persuasion going on.

Another recommendation made by Levitt and Dubner is not to insult the opponents, for example by calling them ignorant, foolish, dupes or crazies. Going by past behaviour, many vaccination partisans won’t be following this advice.

The authors use climate change as an example, pondering the difference between the scientific consensus about the reality of human-induced global warming and the considerable scepticism among the US public. However, they omit one important factor: in the US, there is a powerful fossil-fuel lobby that does everything it can to create doubt about climate science. In many other countries, climate sceptics have low public credibility. So perhaps Levitt and Dubner could make another recommendation: have on your side a powerful and wealthy group that intervenes in the debate.

Stephen Dubner

Levitt and Dubner use a different example to good effect: driverless cars. These are getting better technologically, but to argue for them, they say it is wise to acknowledge possible dangers, for example that a driverless car could plough into a preschool, killing lots of kids. They provide the figures to show that dramatic events, reported in the media, give an unrealistic picture of technological dangers. Cars (with drivers) are the big killer of kids in rich countries, and if driverless cars reduced the road toll even a little, many more kids would be alive and uninjured.

However, there is another way to look at the issue of driverless cars, which is to ask why so many billions of dollars are being devoted to a slight improvement in a transport system that is inherently unsafe, as well as damaging to the environment. For decades, critics of the car have been advocating a range of alternatives: walking, cycling, public transport, and design of cities to make walking and cycling safe and attractive. Recognising such alternatives does not require thinking like a freak, but rather being open to possibilities that clash with the powerful road and auto lobby in the US. Thinking about transport like a freak in Copenhagen, where commuting by bicycle is commonplace, would be different from thinking like a freak in Los Angeles.

The final chapter of Think like a Freak is titled “The upside of quitting.” The authors say that quitting has an undeservedly bad reputation, often being associated with failure. They note that quitting a project, a job or a relationship can have many advantages, but quitting is often not contemplated because of sunk costs and a failure to consider opportunity costs.


They describe tech companies that try out lots of ideas with the aim of testing them promptly and, if they don’t measure up, quitting without investing a lot of money. It makes sense to spend some time and effort, but no more than necessary, determining whether something is a bad idea.

Levitt and Dubner even set up an online operation that offers to flip a coin for people to make decisions, for example whether to leave a job or a relationship. This has attracted tens of thousands of participants who are asked to report on the outcome of the process. Despite some intriguing outcomes, I have reservations. There is research showing that people systematically misjudge what made them happy in the past and what will make them happy in the future. Indeed, there are several illusions involved in people’s explanations for their current state of mind. So while I sympathise with Levitt and Dubner’s encouragement to see the positives involved in quitting and failure, actually measuring the consequences of choices can be challenging.

Think like a Freak is engaging and informative. It is written as a set of stories, and the authors are well aware that story-telling is a powerful technique for getting a message across. The book concludes with some modest comments.

All we’ve done is encourage you to think a bit differently, a bit harder, a bit more freely. Now it’s your turn! We of course hope you enjoyed this book. But our greatest satisfaction would be if it helps you, even in some small measure, to go out and right some wrong, to ease some burden, or even — if this is your thing — to eat more hot dogs. (p. 211)

Brian Martin

Steven D. Levitt and Stephen J. Dubner, Think like a freak: how to think smarter about almost everything (Penguin, 2015)

Marking blind

When marking an essay, it can be better not to know who wrote it.

As a university teacher, one of my regular tasks is to mark assignments, and I want to be as fair as possible to the students. One method I use is to “mark blind,” namely without knowing the name of the student whose work I’m marking.

anonymous student

Most teachers try to be fair and say that knowing the identity of the student makes no difference to them. However, there’s plenty of research about in-group favouritism, where the in-group can be based on family, religion, age, ethnicity or viewpoints, among other possibilities. Teachers are likely to be affected by all sorts of unconscious bias, including expectations about how good a student is.

Students create impressions on their teachers. Some students are more articulate, engaging, humorous or astute in their comments. Then there are the effects of appearance, dress and demeanour. Maybe a student really tries hard, creating a favourable impression of diligence.

Wine tasters evaluate wines without knowing their origin. Vintages can vary considerably from year to year, so it’s better not to be influenced by previous perceptions. Similarly, the quality of a student’s work can vary from class to class and from assignment to assignment, and teacher expectations can affect evaluations.

Even a student’s name can influence perceptions. Male or female? Ethnic? Pretentious-sounding or ordinary? Stereotypes abound and can influence attitudes. If you know a student well, in-group favouritism is a risk; if not, then stereotype bias is a risk. When marking while knowing the student’s name, some mental image of the student is likely to be present, and this probably affects the marker.


My way of limiting this bias is to mark assignments without knowing the student’s name. I ask students to list only their student number on the assignment, not their name.


I find this changes my attitude while marking. My focus becomes to comment on the work done, with less concern about the relationship of the comments or mark to the student. I don’t worry about a student who comes across well in class doing poorly or about a seemingly lackadaisical student doing well. After finishing marking all the assignments, I go online to recombine student numbers with names, and send my comments to the students.

One good aspect of marking blind is that I can say honestly to students that my mark is on their work, not on them personally. They can be more confident that if they receive a good mark, it is a reflection of good work and likewise that if they receive a poor mark, it is not about who they are.



In recent years, I have had my students submit their work electronically, either directly to me by email or through an online forum. Usually this means I can see their names as well as their student numbers – but only temporarily. I put the submitted files into a folder, with each file having as its name the student’s number. By the time there are a few files in the folder, I have forgotten which one is which.

Sometimes, when marking an assignment, I recognise the student. Perhaps it’s because we had talked about it beforehand, or the student gives some revealing personal detail, for example being from Finland when there’s only one Finnish student in the class. More commonly, though, it’s because the student includes their name somewhere in the assignment.

Whatever the reason, I put that assignment at the bottom of the pile and turn to another one. Usually after marking ten or so assignments, I’m on automatic pilot in terms of applying the assessment criteria, and knowing the student’s name is less important.

Sometimes, when marking an assignment, I think I know which student did it. If I’m not absolutely sure, though, the uncertainty helps me switch focus from who did the work to the quality of the work. I want the mark to be appropriate whoever did the work.

If I want to give feedback specifically for a student, supplementary to my comments on the student’s assignment, I can add this after reconnecting student names to assignments.


The pitfalls of familiarity

There’s an inherent tension in any system in which teachers mark their own students’ work. Teachers in such circumstances have two conflicting roles. One is to provide guidance, support and feedback to assist learning. The other is to provide an assessment of the student’s performance.

The trouble is that the assessment role can inhibit the support role. If students are worried about what mark they are going to get, they may be cautious about exposing their ignorance, thereby reducing opportunities for useful feedback. They may also try to curry favour with their teacher.

The way around this is to separate teaching and assessment roles. This occurs with research students in the Australian and British systems. The supervisor supports the student to produce a satisfactory thesis. Then the thesis is assessed by independent examiners. At the University of Wollongong, there are strict rules to ensure independence. At the PhD level, for example, examiners cannot have worked at the university in the past five years, nor have collaborated with any supervisor or the student, among other restrictions.

In years gone by, supervisors were examiners for their own students’ honours theses, but this was open to abuse: some supervisors became advocates for their favoured students, while some unfortunate students, who had clashed with their supervisors, were treated harshly. The rules were changed to prevent supervisors from being examiners, though in some parts of the university there was resistance, with supervisors insisting that only they had the expertise to judge their students’ work.

One year, I made an arrangement with a colleague at another university: he would mark the final assignments from my undergraduate class and I’d do the same for him. This enabled me to be a support person for my students, giving them feedback on drafts before marking by my colleague. I thought the system was worthwhile, but it seems that few academics are receptive to this sort of exchange. My colleague never supplied me with the essays from his class. My guess is that he did not feel comfortable relinquishing his control over marks for his students. If, instead of needing to mark the work of 90 students in a semester, the figure was closer to 40, I might try again to arrange an exchange with a colleague or with one of the other tutors in my classes.


Other biases

Blind marking can limit biases due to knowing who did the work, but it doesn’t eliminate other sorts of biases. One of the most common is ideological: if students say things you agree with, they are more likely to create a favourable impression than if they challenge your beliefs. If you’re teaching on topics where there are strong differences in opinion, for example addressing abortion or biotechnology, being fair can be difficult.

There’s another problem too. Students are very sensitive to the views of their teachers, and many students will say what they think their teachers want to hear. This is probably more damaging and insidious than teacher bias itself.

Many years ago, I taught a course on environmental politics and used case studies as a basis for understanding theory. Many of the students were doing an environmental science degree, most thought of themselves as environmentally conscious, and it was hard to get them to think critically about their own beliefs. When nuclear power was the case study, nearly all students were opposed to it, and few had the confidence to present pro-nuclear arguments. Furthermore, the students knew I was an opponent of nuclear power.

Then I introduced fluoridation as a case study. Some students asked me during class, “What do you think, Brian?” I’d respond that I was studying the controversy as a social scientist and didn’t have a strong personal opinion. This answer frustrated them: they obviously wanted to know my view so they would know better what to write in their assignments.

Furthermore, there was no standard environmental view about fluoridation, and different class members had different views on fluoridation, leading to more stimulating discussions than on other topics. The students had to think for themselves rather than regurgitate a standard line or say what they thought I wanted to hear.

On just one occasion, I used one of my books as a text. I didn’t like this, because I felt students were inhibited. Personally, I would have liked to hear their criticisms of my ideas, but few students have the confidence to question their teacher’s well-formed views. Basing teaching on your own research means you have greater knowledge, but does it help students learn more effectively?



Fairness is just one consideration when marking. Ultimately, the goal is helping students to learn and to become independent, critical, ethical, self-motivated learners. How to do this is a continual challenge for which there is no single answer. I recommend trying blind marking to see what it’s like and to see how students respond.

Brian Martin

Thanks to Anne Melano and Caroline Colton for useful comments.

Subject outlines illustrating how students can be instructed to submit blinded assignments.
CST228, 2015: see pages 11 and 15
BCM390, 2015: see page 17

See also: Marking essays: making it easier and more fun

Learning: how to do it better

We continue to learn our entire lives. Research shows ways to do it better, but this means changing our habits.


Learning — we do it all the time, when reading messages, hearing the news, starting a new job, and in a host of other circumstances. Then there is formal learning, in classrooms and when studying for assignments.

Most people learn how to learn when they are young, and continue with the same methods for most of their lives. What if there are better ways to go about it?

Benedict Carey is a long-time science writer, and since 2004 has written for the New York Times. Gradually, he became interested in research on how people learn, and set out on a quest, contacting leading researchers on learning. He was surprised to find that, according to the latest research, what he had done during high school, long sessions of concentrated attention on study topics, was really not all that effective. In his book How We Learn (Random House, 2014), Carey provides an accessible guide to key practical findings from learning research.


Carey makes his account engaging by telling stories about pioneering researchers who developed ideas taken up later. He then spells out the implications for learners, whether they are in schools, universities, jobs or everyday life.

The spacing effect

Which is better: studying for two hours in one session, or for two sessions of one hour each on two different days? The answer is clear: two separate sessions are better, whether you want to learn facts or skills. This shouldn’t be news. In athletics, where learning techniques make the difference between winning and losing, training is normally spaced out. Runners do not postpone training until the day before the race.

Yet generations of students have crammed for exams and other assignments. As an undergraduate, I stayed up all night on several occasions to write essays. It was the only time in my life that I drank coffee! The trouble with cramming is that nearly everything learned is quickly forgotten. Spacing out study is more efficient: you can learn more in less time and retain it longer.


But what’s the best sort of spacing? If you have two weeks to learn the names of the bones in the body, and want to spend a total of two hours studying, is it better to use two sessions of an hour, twelve sessions of 10 minutes, or some other breakdown? And how should the study sessions be spaced? Should one be just before a test? Or, if long-term retention is the goal, what’s the best option? Carey examines what is known about spacing. In general, more spacing is better, but there is still much to be discovered about the optimum spacing for learning different sorts of material.

The testing effect

If you don’t know anything about a topic – for example, Chinese history in the 1700s – then surely the best way to learn about it is to start studying. Actually, though, you’ll learn more efficiently if you take a test on the material before you start, even though you just guess at the answers. Somehow this primes the mind to pay more attention when you do start studying. This is a really strange research finding.

Educationists commonly talk about two types of assessment. Summative assessment measures learning whereas formative assessment is designed to improve learning. Actually, though, all assessment is formative to some degree: it is a method of learning.


Formal assessment is designed by teachers. But there’s another type of testing: self-testing. When you’re studying, you can test yourself regularly. Or you can try to explain the topic to a friend. Testing yourself can overcome the fluency illusion, in which you have the incorrect belief that you know something because it seems familiar. Carey writes:

These apparently simple attempts to communicate what you’ve learned, to yourself or others, are not merely a form of self-testing, in the conventional sense, but studying – the high-octane kind, 20 to 30 percent more powerful than if you continued sitting on your butt, staring at that outline. Better yet, those exercises will dispel the fluency illusion. They’ll expose what you don’t know, where you’re confused, what you’ve forgotten – and fast. (p. 103)


Many students think they’re learning only when they’re studying. On this view, it doesn’t matter when they study, even if it’s at the last moment: it’s just necessary to put in enough hours. The spacing effect shows that something happens in between study sessions: the unconscious mind engages with the material, without you even noticing it. There’s another aspect to this process, called the incubation or percolation effect.

Here’s the trick. When studying a topic intensely, it’s actually better to interrupt the process before finishing, and leave the mind to chew away at it before the next session. In terms of writing, this means not finishing an essay, but instead leaving it incomplete for the time being.


When a task isn’t complete, the mind won’t let it alone, so in the long run you learn more by being interrupted at odd times while pursuing a task. Carey:

… we should start work on large projects as soon as possible and stop when we get stuck, with the confidence that we are initiating percolation, not quitting. My tendency as a student was always to procrastinate on big research papers and take care of the smaller stuff first. Do the easy reading. Clean the kitchen. Check some things off the to-do list. Then, once I finally sat down to face the big beast, I’d push myself frantically toward the finish line and despair if I didn’t make it.
Quitting before I’m ahead doesn’t put the project to sleep; it keeps it awake. (p. 147)

The incubation effect is used by great creators who bore away at a problem for weeks or months and then take a break – and this is often when the best ideas pop up. The challenge is to trust your own mind and treat interruptions to significant tasks as opportunities rather than sources of worry.


The usual way of learning is to concentrate on a particular task until it is mastered, and then go on to the next task. It sounds logical, but actually there’s a more productive technique, which is to mix up the tasks.

Carey describes the technique of interleaving. Here’s a typical research protocol. One group of students learned artistic styles by looking first at six paintings by one artist, say Braque, and then six by another, say Mylrea, and so on through twelve artists. A different group of students saw exactly the same paintings for the same length of time, but mixed up in a random sequence. At the end, students in each group were shown paintings they had not seen before and asked to name the artist. Which group did better? It was the ones who saw the paintings in a random order.

This outcome has been reproduced in numerous studies involving discrimination between categories. During the learning phase, students exposed to interleaving don’t feel like they are learning, but actually they improve faster.


“That may be the most astounding thing about this technique,” said John Dunlosky, a psychologist at Kent State University, who has shown that interleaving accelerates our ability to distinguish between bird species. “People don’t believe it, even after you show them they’ve done better.”
This much is clear: The mixing of items, skills, or concepts during practice, over the longer term, seems to help us not only see the distinctions between them but also to achieve a clearer grasp of each one individually. The hardest part is abandoning our primal faith in repetition. (p. 164)

Athletic coaches long ago figured out that exercising a particular muscle too much at a time is not productive, so they mix up training, switching between different muscle groups. The studies of learning artistic styles show that mixing things up is a more general learning strategy, with applications in many areas.

Other factors

Carey also discusses other factors that enable faster and longer-lasting learning. These include perceptual learning, which happens without having to think about it, and the role of different sleep cycles in consolidating learning.

Sleep helps to form memories.

It is fascinating that there are ways to speed up learning in a wide range of contexts, for example pilots comprehending the implications of different instrument panels or language students learning Mandarin.

It is tempting to think that it would be possible to take advantage of several of the techniques described by Carey and quickly become a much more efficient learner. If you are in the hands of one of the researchers or skilled practitioners using one of the techniques, such as interleaving or perceptual learning, then you have an advantage. But to take the initiative to adopt these techniques on your own is another matter.

One of the key considerations is habit — and many people’s learning habits are deeply entrenched. It can be quite challenging to replace one habit with another, though there is good research on how to do this.

To better understand the challenges of adopting some of the techniques presented by Carey, here I’ll discuss how they relate to the high-output writing programme I’ve been using for several years.


Robert Boice, a psychologist and educational researcher, addressed the problem of low research productivity. Many of his important studies date from the 1980s.

Robert Boice

He observed newly appointed academics and noticed that most of them struggled with the demands of the job, but a few were highly productive in research and, furthermore, were less stressed than their colleagues. Boice thought the techniques used by these productive new academics might be taught to others, and he showed how this could be done.

Advice for new faculty members

Boice’s approach was elaborated by Tara Gray and turned into a twelve-step programme. The core of the approach is doing some writing every day or nearly every day, but not too much. Boice advocated stopping while still fresh, in order to have energy and enthusiasm to continue the next day. A central theme in Boice’s approach is moderation, to overcome the syndrome of procrastination and bingeing.

Gray says to start writing from the very beginning of a research project. For example, in doing a PhD, you should start writing the first day, rather than spending a couple of years first reading and collecting data. The slogan here is “write before you’re ready.”


How does the Boice-Gray approach to writing measure up in relation to the techniques described by Carey that enhance learning? First is the spacing effect: it’s more productive to space out learning sessions. That is actually the foundation of the writing programme: it is designed to overcome the usual approach of procrastination and bingeing.

Second is the testing effect: it is productive to use testing as a form of studying. In the writing programme, daily writing is done without looking at texts or stopping to look up references. You might have a few dot-point notes, but otherwise everything has to come from your head. In effect, it is a type of testing of your memory of what you want to say. For example, if you’ve read some articles the previous day, you write about them without consulting them: it’s a test, and a powerful learning tool.

Third is incubation. This is central to the writing programme. In between writing sessions, the unconscious mind is going over what to say next. In one of Boice’s studies, he looked at the number of creative ideas produced by academics in three conditions: no writing, normal writing (bingeing) and daily writing. No writing was worst for generating new ideas, normal writing was twice as good and daily writing was five times as good. The writing programme might be seen as turning the incubation process into a routine.

Another facet of incubation is that you learn more when you interrupt your study before finishing. This happens every day in the Boice-Gray programme, and can be enhanced by a simple technique. At the end of your daily writing session, finish in the middle of developing an idea, perhaps even in the middle of a paragraph or sentence. This incomplete expression of an idea serves to stimulate thinking, and often by the next day your unconscious mind has come up with a way to complete the thought.

Tara Gray

Fourth is interleaving: learning about a range of different topics at the same session. This is not usually part of the writing programme, but could be incorporated into it. Usually I write about the same topic from one day to the next, gradually writing the draft of an article or chapter. But sometimes I feel a bit stuck and switch to a different project and topic, coming back to the other one when I feel ready, which can be days, weeks or months later. No doubt interleaving can be used in other ways to improve writing productivity.

Fifth is mixing up learning contexts: you can consolidate your learning by studying in different surroundings and times of the day. The idea is to embed your learning in different environments. This is different from what’s usually recommended in the writing programme, which is to have a routine and stick with it. I think this difference points to an important factor not addressed by Carey: how to motivate continued effort at learning.

The practice of doing just a small amount of daily writing is designed to reduce the barriers to beginning a session. To add pressure, Boice asked academics to report to him weekly with a log of the minutes they had written each day and the number of words they had produced each day. This accountability process made a huge difference. Daily writing combined with reporting a weekly log to Boice improved productivity by a factor of nine compared to the usual procrastination-bingeing approach.

The technique of varying the learning contexts is worthwhile if your writing habit is well established. But few writers seem to have such a solid habit. Writing while travelling would seem like an ideal opportunity to vary contexts, but Gray reports that when travelling, away from the usual routine, writing at all is a challenge for her, and many others have told me the same.



The message here is that the techniques described by Carey are highly worthwhile and should be investigated by anyone for whom learning is important. However, a key consideration is how to turn a new learning approach into a habit. If you can do this, you’ve truly learned something worthwhile.

Benedict Carey

Meanwhile, generations of students carry on with their usual approaches, and so does most teaching. There is important research being done on learning, and Carey has pointed to some of the most practical findings. When these will affect schools and training programmes is another matter. Not soon, I suspect. So read How We Learn, pick one or two techniques relevant to your needs, and become a more efficient learner – and enjoy it too!

Brian Martin

Thanks to Don Eldridge for helpful comments.

Open access dilemmas

Open access publishing is coming, but the scene is complicated and up-and-coming academics face difficult decisions.


Commercial publishers of academic journals seem to have a good thing going. Academics write the articles, but are not paid for them. Other academics serve as referees; they are not paid either. Editors manage the process; they might receive some support from their universities. After articles are published, academic libraries pay for them.

Academic institutions, most of them supported by governments, provide the money for writing, refereeing and editing articles, and then for libraries, serving academic readers, to buy back the published articles.


So what do the commercial publishers do? They might provide some copyediting, but mainly they extract exorbitant profits from their monopoly position. This has become ever more inefficient with the rise of electronic publishing. Many journals do not print hard copies. Few individuals subscribe to major academic journals and receive printed copies. Online access is the standard option.

Meanwhile, anyone outside universities is disenfranchised. Buying a single article of a few pages might cost US$30 or more.

The inefficiencies, exploitation and absurdities of the academic journal market have led to the rise of the open access movement, with the goal of ensuring that all academic work is available to anyone at no cost. The push for open access (OA) is having an impact, but the whole area is becoming increasingly complicated.

One model, called gold OA, involves the publisher making articles free online immediately on publication. However, commercial publishers want to make money, naturally enough, so they are adopting various methods to preserve their revenue. The most common is to require authors, or their institutions, to pay a fee for gold OA. This might be US$3000 or so. It’s a disincentive for anyone who does not have institutional support.

Another model, called green OA, involves authors putting the final pre-publication versions of their articles online, usually in an institutional repository. This gives access, but for those who want to obtain the publisher’s pdf version, access through a library is usually required.

The trouble with these models is that the large commercial publishers are still extracting super-profits due to their monopoly control. The reason is that the market for academic journals is not truly competitive.


In principle, academic authors could choose to publish wherever they like. If journal A is slow and expensive, then go to journal B that is quick and provides free gold OA. The trouble is that journals have reputations, and academics are judged as much by where they publish as by the quality of their articles, if not more. You can write brilliant articles but if you publish them in low-status journals, your work will not be treated as seriously by fellow academics. Most of the new OA journals have not had sufficient time to develop reputations.

For an academic who is no longer seeking grants or promotions, there is no need to publish in journals that are high status or high impact: more important might be getting to receptive audiences who actually want to read the article. That might be a high-status journal in some cases and a lesser ranked outlet in others.

But such academics are the exception. Most, especially in early stages in their careers, need to worry about the impact of publications on their curriculum vitae: their most important audience is not those who actually read their articles but members of job, promotion and grant committees who read their applications. A few of these “readers” may occasionally read articles to assess their quality and importance, but many instead use the proxy measure of the status or impact of the journals in which articles are published.

This emphasis on the status of outlets is exacerbated by some organisational, disciplinary or national research evaluation schemes. The government scheme called Excellence in Research for Australia (ERA) initially provided a rating of scholarly journals (A*, A, B, C and non-ranked), and universities were assessed based on outputs using these ratings. The system had the perverse effect of penalising publication in lower-rated journals. A scholar who published four articles in A* journals helped the university’s score more than one who published four articles in A* journals plus four more in C journals. Although the journal ratings were later withdrawn, they continue to play a role even after their official demise: academics going for promotion often identify the “former ERA rating” of the journals in which they have published. Few bother to identify the OA status of the journals.

Academics who care about both access and advancement are thus caught in a cruel dilemma. They can choose to publish only in high-status journals, maximising their career prospects while usually supporting the big commercial publishers, or they can support newer free OA journals but possibly with a cost to their academic prestige. Are there other options? And what are the prospects for the future?

Research on OA

I obtained a taste of the developments and complexities in this area by reading a lengthy document titled Open Access Publishing: A Literature Review. It was written by Giancarlo F. Frosio for a British research centre with the acronym CREATe; he has since moved to Stanford Law School.

Open Access Publishing is far more than a literature review, being instead an impressive book-length discourse and state-of-the-art assessment of OA. It includes an historical treatment of the development of publishing and copyright, coverage of a range of theories concerning copyright and OA, and a detailed assessment of OA models for publishing and for organisational policies.

Giancarlo Frosio

The history of copyright is worth studying. While it once might have made sense to provide incentives for creative work, the duration of copyright has expanded seemingly without bounds. Five or ten years of copyright protection might encourage an author to be more productive, but few authors will work harder still because copyright is extended to 70 years after their death. Currently there is perpetual copyright on the instalment plan, with extensions made whenever Mickey Mouse is about to go out of copyright. This means that those who control copyrights are extracting money based on monopoly privilege. This makes even less sense for academic publications, because most scholars sign away their rights and receive no royalties for journal articles.

My impression, after reading Frosio’s review, is that the field of academic publishing is in a state of flux, buzzing with a bewildering set of options and challenges. The central driving force in this complexity is the attempt by commercial publishers to maintain a central role in the publication process despite the fact that they serve little practical purpose, given the existence of OA models.

The OA movement has made great strides. Compared to a decade ago, vastly more universities have online repositories and policies to encourage authors to make their publications available through them. There are many more OA journals, some with high prestige. More government agencies are mandating OA for all publications in relevant areas.

Nevertheless, there are problems. The move to OA is not nearly as rapid as proponents had hoped, in part due to tactics used by publishers but even more due to the scholarly prestige system, with its incentives for publishing in the “best” journals.

For books, OA options are less advanced. Few publishers allow authors to post book images online, even decades after publication, when no more hard copies are being sold. Few authors go to the trouble of putting pre-publication versions of their books online.

Yet with current technology, it is extremely simple to publish OA books with little or no cost. After producing a PDF of the book — something fairly easy to do with word processors — it can be provided free online. Furthermore, there are print-on-demand services through which hard copies can be produced and sold at a moderate cost to the buyer and no cost at all to the author or publisher. Consider an esoteric scholarly tome that might sell 50 copies if produced by a commercial publisher. Why would any publisher take it on with such low sales, except at an exorbitant price? The same tome can be made free online and available for sale via print-on-demand for close to zero cost, and will probably reach far more readers from around the world.

Many publishers now make electronic versions of books available, but at a cost that restricts sales mainly to libraries. This disenfranchises those without free electronic access, though they can still read many pages via Google Books. The main reason why the majority of academics have not endorsed OA book publishing options is that they want their books published by publishers with high status.

publisher profits
Source: Alex Holcombe’s blog

Whose interests are being served?

Arguments for OA often appeal to self-interest or collective interest. For example, academics are encouraged to put their articles in institutional repositories or publish in OA journals because this will increase their visibility, readership and citations. Institutions are encouraged to adopt OA mandate policies to make scholarly work available to those with less money, including both academics in less well-funded institutions and members of the general public. Advocates of OA argue that costs will be reduced, taxpayer money used more efficiently (rather than being diverted to publishers) and universities seen as more accountable.

The usual arguments for OA can be taken a step further by asking additional questions about scholarly publication. OA means that research is available at little or no cost to readers, including students, other researchers and the general public. However, access is only one factor in making research useful to others.

One key element is understandability. Most academic writing is turgid, dense and filled with jargon, so much so that no one is likely to be interested in reading it except perhaps other academics in the same field, and even they usually prefer a more approachable style.


The usual academic writing style is promoted through the expectations of editors and referees: a submission using colloquial language and an engaging style of writing is more likely to be rejected as superficial even when the content is exactly the same. Opaque writing styles serve to exclude those from other fields and maintain a mystique of insider knowledge.

Given the low cost of online publishing, constraints of length no longer have much relevance. Hence, greater consideration could be given to making scholarly writing accessible to wider audiences, by changing the expected style of regular articles or by offering a supplementary exposition for non-experts. Authors who did this could expect to attract a wider non-specialist readership, with the potential of greater cross-disciplinary collaboration and engagement with practitioners and users. Highly technical papers might be supplemented by explanations of the context and significance of the work for wider audiences.

Open access might make some contribution towards greater understandability. Authors whose work is freely available potentially speak to two audiences: specialists in their fields and interested non-specialists. The response of non-specialists is becoming more important in terms of impact, so some authors will be encouraged to write for this wider audience, just as more scholars are setting up blogs.

OA also provides an incentive for higher quality in research. This is most obvious in open post-publication peer review, in which comments can be made on articles after publication. Even without this sort of review, immediate availability of publications can temper the tendency to hype research results. If a media release makes a claim about helping cure cancer, interested readers can check the research article for confirmation, and also check whether its abstract correctly summarises the findings in the body of the paper.

The process of public scrutiny can be uncomfortable for authors, especially given the nastiness of much online commentary. Moderation of published comments seems essential, but it takes time and effort.


The Internet is making possible a revolution in publishing, in which a much wider range of individuals can contribute to scholarship and public debate in a variety of ways. OA is one facet of this revolution. However, there is considerable resistance to full adoption of OA. Publishers are making huge profits through their intermediary role, though it is becoming ever more irrelevant. The other major obstacle to change is the self-interest of researchers, who are driven by the quest for status. As Frosio writes, “the academic reward system continues to be a major obstacle for gold OAP [OA publishing]” (p. 161). Those who care about scholarship and about public participation need to be involved to help push developments in productive directions.

Brian Martin

Giancarlo F. Frosio, Open Access Publishing: A Literature Review, CREATe Working Paper 2014/1,


Thanks to Michael Organ for useful comments.

The benefits of face-to-face

Relationships can be highly beneficial in people’s lives. For best outcomes, they need to be face-to-face.


In the past 20 years, there has been a boom in research on happiness, sometimes called wellbeing or flourishing. A range of behaviours and mental patterns have been shown to improve happiness, including being physically active, expressing gratitude, being optimistic, helping others and being forgiving. For some of these topics, authors have written entire books explaining the research and its implications.

Among all the methods of improving happiness, one of the most often cited is relationships. Research shows that positive interactions with others can make a huge difference to people’s lives. This includes family members, friends, neighbours, co-workers and many others, even extending to casual acquaintances and people met in commercial contexts, such as hairdressers and salespeople.

A few years ago, in the happiness course run by Chris Barker and me, the vagaries of timetabling meant that part way through one of my classes, my students and I had to walk across campus to get from one classroom to another. We carried out observations and informal interventions during these walks. One of these was to observe the other walkers we saw on the way, and notice whether they were smiling or otherwise seemed happy. It was striking that those walking and talking in groups nearly always seemed happier than those walking alone.

If this topic interests you, I recommend Susan Pinker’s new book surveying research on relationships, titled The Village Effect. The subtitle gives a convenient summary of the main themes: How face-to-face contact can make us healthier, happier, and smarter. She provides a wealth of examples, case studies, findings and patterns to make the case for the benefits of personal relationships.


She tells of communities in the mountainous regions of Sardinia, where life is traditional and exacting, where people have rich personal connections and where they live far longer than would be expected given other lifestyle factors such as diet. Pinker uses this as an extended example, also citing much other research on the effect of relationships on longevity and physical health.

The Sardinians are an exception, for they have maintained traditional patterns of village life in the face of incentives to “join the modern world.” There is a deep irony in aspects of contemporary economies. Higher standards of living can improve happiness, but also undermine it.

The irony is that most people want greater happiness, yet the way they go about it can undermine it. An example is seeking a higher income. There is plenty of research showing that, above a certain level, greater income and more possessions make only a marginal difference to wellbeing, certainly far less than alternatives such as expressing gratitude or being mindful. Yet many people, in search of improved happiness, will take on a second job or move to another city at the expense of time with their family and friends.

Face-to-face versus screens

In the past few decades, there has been a big shift from face-to-face interactions to digital connections using email, texts, Facebook and a host of other platforms, not to mention the long-standing attraction of television, partly supplanted by video games. It might seem that social media, because they are interactive, are superior to the mass media of radio and television. Pinker quotes research about the advantages of face-to-face contact compared to digital contact.

The irony is that parents who spend their hard-earned cash on gadgets so their children will have immediate access to communication networks may also be facilitating their girls’ feelings of social exclusion. Girls with televisions, computers, and cellphones in their rooms, for example, sleep less, have more undesirable friends (according to their parents), and are the least likely to get together with their real buddies face-to-face. Yet, according to this study too, it is exactly these face-to-face interactions that are most tightly linked to feeling happy and socially at ease. If North American girls spend an average of almost seven hours a day using various media and their face-to-face social interactions average about two hours a day … then many girls are spending most of their spare time on activities that make them feel excluded and unhappy. (pp. 163–164)


Such findings have significant implications in a range of areas. Children are especially in need of personal interaction to stimulate their developing minds, yet digital tools are proliferating and being used at ever younger ages. When it comes to formal education, face-to-face contact with teachers turns out to be crucial. Investments in better teachers appear to be far better for improving learning outcomes than investments in advanced technology.

Many Australian universities, being squeezed for cash, have cut back on class contact. Small tutorials, with maximum interaction between teachers and students, are made larger, and sometimes tutorials are abandoned in favour of lectures, or replaced by online interactions. Evidence cited by Pinker suggests that it would be better to get rid of the lectures and retain the tutorials — at least if learning is the goal.

For example, in one study, almost a million US students in grades 5 to 8 were surveyed about media use, while their school results were monitored. “With the advent of home computers, the students’ reading, writing, and math scores dropped, and they remained low for as long as the researchers kept tabs on them.” (p. 190)

Susan Pinker

Is there any alternative?

Given that there are numerous ways to improve happiness, are relationships really so fundamental? There may be some loners who can be perfectly happy because they are great meditators or have found an activity that provides a satisfying experience of immersive involvement. Surely they can be happy with low levels of face-to-face contact.

Pinker addresses this, for example noting that although people on the autism spectrum have very poor relationship skills, they can still benefit from improving those skills and interacting more. However, I would not assume that relationships are essential for everyone’s happiness. No doubt even the most ungrateful person can become happier by becoming better at expressing thanks, but this is not the only way to become happier.

More generally, Pinker devotes a chapter to the negative aspects of relationships. Face-to-face connections can be highly damaging in some contexts, with fraudsters taking advantage of the trust engendered by social similarity.

Pinker’s overall message is to try to maintain face-to-face connections. Talk to the colleague in the next office rather than sending an email; take time to visit friends; have meals with family members, in the same room!


However, wider trends are working in the opposite direction. Individuals can improve their own lives by building their personal connections, but must do this in the face of the relentless encouragement to use digital media and to pursue careers at the expense of time with friends and family.

Technology to the rescue?

According to Pinker’s argument, much of the decline in face-to-face interaction is due to displacement by technology, especially the ever-present screens in people’s lives. So technology, while making interaction at a distance far easier, is reducing something valuable.

For me, there remain further questions: are some sorts of technologically-mediated interaction considerably better than others, and could future media simulate being in a room with someone?

The loss of personal connection accelerated with the rise of television, as people watched screens with which they had no interaction. Watching television with others in the room offers the possibility of some live discussion, but it is increasingly common for each member of a household to have their own screen in their own room.

The telephone offers a far more interactive experience. Voices are incredibly rich with meanings independent of the words spoken, so there can be a personal connection at a distance, though visual and tactile dimensions are missing.


Texting and email are more abstract forms of interaction — but at least they are personal, unlike television. Prior to email, people used to write letters, which had a tactile component, and a personal one when handwritten. But letters took a long time to arrive compared to a text. How do these media compare?

Then there is Skype, providing an aural and visual interaction much richer than either telephone or writing. Does it partially substitute for the real thing?


The next stage is virtual reality, in which avatars interact with each other in realistic simulated three-dimensional spaces. Virtual reality technology is available today, but not widely used to mimic face-to-face interactions. In principle, it could eventually simulate nearly every aspect of human contact, even including touch and smell. It will never be exactly like physical presence, but will realistic simulation compensate? Not if people aren’t honest about themselves. Pinker cites research on online dating showing that 80% of people misrepresent their age, weight, height, appearance, income or other attributes.

Rather than look to technology to solve a problem exacerbated by technology, the alternative is to reassert the importance of physical presence. Pinker notes that affluent parents are now giving their children the advantage of schools and teachers with more personal interaction.

There is a certain irony in efforts to recreate the benefits of face-to-face interaction. Many of the poor people in the world live in extended families and in small communities where there are numerous routine personal interactions. They have the benefits of what Pinker calls “the village effect.” Do they have to pass through an isolating development transition, or are there ways to “develop” that maintain the advantages of face-to-face?

Brian Martin

Thanks to Don Eldridge for valuable comments.