Are book reviewers softer than movie reviewers?

I was intrigued last November by this Marginal Revolution post on why music reviews tend to be more positive than film reviews. I was particularly interested in a comment comparing film and book reviews:

Book reviews are generally positive because reviewers frequently have a choice of which book of several to review, and choose to read books they expect to like, and then to give publicity to ones they enjoyed (knowing that there’s no such thing as bad publicity). It seems like there is a three-step process:
1. Is this book likely to be worth my time?
2. Read it. Is this book worth writing about?
3. Write a review.
Movie reviews basically have to cover all releases in a week, so there is no such filtering out of bad products.

My perception is that book reviews tend to be softer than movie reviews, but I had a different theory: that for social reasons people in the relatively small literary community are reluctant to give negative reviews to people they are likely to meet, if they don’t know them already. As the vast majority of reviewed films are foreign, this is less of a problem in movie reviewing.

A friend who is a part-time literary critic made a related point: many book reviewers are actual or aspiring book writers themselves, while few movie reviewers have made or are likely to make a film. There are several reasons why this may lead book authors to be softer reviewers: they don’t want to provoke negative reviews of their own work; compared to a critic, less of their reputation with readers rests on providing good advice to book buyers (which could lower the quality of reviewing overall, not just make it softer); and, having been through the pain of writing themselves, they may simply feel sorry for authors, even if the book isn’t much good.

Having an empirical mindset, I decided to try to test these theories by monitoring film and book reviews in the Weekend Australian, the Sydney Morning Herald, and At the Movies (removing David Stratton overlaps). I did this from 17 January until this weekend. This gives me a sample of 105 book reviews and 60 film reviews, enough for some initial testing of the theories.

An initial methodological problem was that while film reviews have a convenient five-star system, this practice is not used for book reviews (why not? another curious difference between the two reviewing systems). So I had to attribute stars based on my reading of the review. When the superlatives were flowing I gave 4 or above, when the review was broadly positive but not enthusiastic I gave 3 or 3.5, when it was lukewarm I gave 2.5, and when the reviewer clearly did not like the book I gave it 2 or below. Mostly this wasn’t too hard to do, but occasionally reviewing practices such as barely mentioning the book under review while discussing its general topic made this difficult. It is hard to know whether this is a tactic to avoid giving a negative review, or whether the reviewer is so vain that he believes we would rather hear his views on the topic than the author’s.
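
To make that coding rule concrete, here is a minimal sketch in Python of the banding I applied; the tone labels are just my own shorthand for the judgements described above, not categories any reviewer used.

    # Illustrative sketch only: converting my reading of a review's tone into stars.
    def stars_for_tone(tone: str) -> float:
        """Map a hand-coded judgement of a review's tone to a star band."""
        bands = {
            "superlative": 4.0,  # superlatives flowing: 4 or above
            "positive": 3.0,     # broadly positive but not enthusiastic: 3 or 3.5
            "lukewarm": 2.5,     # lukewarm
            "negative": 2.0,     # reviewer clearly did not like the book: 2 or below
        }
        return bands[tone]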

The results do show the pattern I expected, but not nearly as strongly as I anticipated. Book reviews averaged 3.5 stars and movie reviews 3.4. Among the book reviews, reviewers who were writers themselves averaged 3.6, while those I categorised as critics averaged 3.4 (I could not classify all reviewers into either category; I counted people as critics where I knew they were regular reviewers, their biographical note said they were reviewers, or they were academics).
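
For anyone inclined to repeat the tallying, here is a minimal sketch of how the averages could be computed once each review has been coded; the rows below are illustrative placeholders, not my actual data.

    # Illustrative sketch only: each review hand-coded as (medium, reviewer_type, stars).
    reviews = [
        ("book", "writer", 4.0),   # placeholder rows, not the real dataset
        ("book", "critic", 3.0),
        ("film", None, 3.5),
    ]

    def average(rows):
        """Mean star rating of a list of (medium, reviewer_type, stars) tuples."""
        return sum(stars for _, _, stars in rows) / len(rows) if rows else None

    books = [r for r in reviews if r[0] == "book"]
    films = [r for r in reviews if r[0] == "film"]

    print("All book reviews:", average(books))
    print("All film reviews:", average(films))
    print("Books reviewed by writers:", average([r for r in books if r[1] == "writer"]))
    print("Books reviewed by critics:", average([r for r in books if r[1] == "critic"]))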

At the Movies averaged 3.3, while the newspapers, which run fewer reviews, averaged 3.5 – consistent with the theory that the more reviews you have to run each week, the more likely you are to cover weaker films. The SMH, for example, reviews only two films a week in its Saturday edition, and gave no ‘fail’ marks and only one 2.5 review – absurdly, in my view, to Clint Eastwood’s excellent Gran Torino, which they presumably chose expecting it to be good.

To do this properly would require a much larger sample and at least two people assigning stars to the book reviews. But overall it suggests to me that despite some structural features that count against book reviewing being conducted in the interests of book readers, the profession is not as compromised by them as I thought it might be.

7 thoughts on “Are book reviewers softer than movie reviewers?”

  1. I’ve started a film review blog (click my link) where most of the movies get a good rating because I don’t have to see and write about everything going around. Saw Gran Torino but didn’t review it because I found it disappointing. Very little new ground.

    When you usually pay to go to the cinema, a selection process is essential and hopefully works reasonably well.

    These days I often don’t finish books that I would give a poor review. It’s a bit like walking out of the cinema, which we never do.

  2. “Saw Gran Torino but didn’t review it because I found it disappointing. Very little new ground.”

    I admit I see a small minority of annual film releases, but it is a long time since I have seen ethnic prejudice dealt with as anything other than a simple morality tale, and certainly not with the humour in this film.

    For another study: whether book reviews are as polarised as movie reviews can be (Kevin R liked the Benjamin Button film, but despite the interesting original idea I would give it 2.5, not the 4 Kevin gave it. We agree on The Class, though).

  3. “To do this properly would require a much larger sample”

    Andrew, I suspect that you understate the methodological challenges, and to get meaningful results you would not only need a bigger sample, but would also have to segment the sample.

    Nowadays I pay scant attention to the book reviews in Australian papers, but they certainly used to range all over the publishing market, from expensive years-in-the-making historical tomes on the European discovery of the Pacific to the latest James Patterson or John Grisham potboilers.

    I can imagine that the temptation to temper criticism would be much weaker, when faced with the latest Grisham or Patterson, than when the reviewer and the author are both members of the same small academic speciality that only contains about half a dozen people in the whole country.

    You also have to make some allowance for academic factionalism and the sort of long-time feuds that can exist in relatively small communities, which could skew the results for certain books in the opposite direction.

    Years ago I was a fly on the wall at a startup academic journal. In that case the veteran academics in the department, with long memories of and a keen interest in who had fought with and/or slept with whom in the field, were assiduous in steering the neophyte editors away from impolitic choices of reviewer. But not every academic editor has been so fair-minded, and in the newspapers one might expect somewhat looser standards to apply.

  4. Alan – Given there is no easy measure of an ‘objective’ book or film review, I doubt this issue can be conclusively resolved.

    Different kinds of publications will also approach reviewing in different ways. For the kind of academic publication you are talking about, a book is likely to be assessed on its contribution to the discipline, and reviewed by someone with acknowledged expertise – the downside of which is that they are likely to have preconceived views on the topic that may lead them to over-rate or under-rate the book, depending on how close the book is to their preconceptions. They are unlikely to put readability high on their list of considerations.

    For newspapers, few academic books are reviewed, and few academic reviewers used. The general critic is probably going to put more emphasis on the book’s overall literary qualities than on its originality or otherwise. They may not know enough to realise the book is a rehash, and for the general reader this may not matter.

  5. Most reviews of non-fiction appear to be along party lines and reveal more than anything else the prejudices of the reviewer. This came across strongly in a column that Stephen Matchett used to write, reviewing the book reviews.

  6. Maybe book reviews are softer, or maybe it’s just that, on average, reviewed books are better than reviewed movies. This could be partly for the reasons provided in the Marginal Revolution post, but it could also be that (at least literature-style) books have to be better than (arthouse) movies, if (as I consider is likely) book readers are on average more intelligent and mature than movie viewers. So a reviewer of both media, of given (high) intelligence and maturity, is likely to find the average book more rewarding than the average movie. Drawing a dodgy analogy with higher ed, I’ve often noticed that GR8 students rate their institution poorly on teaching, etc. But how would they rate second or third-tier institutions if they were forced to attend them?

  7. I think Australian reviews of Australian books really are delicate – the community is small enough that you can easily make lifelong enemies who can do you real damage.

    Australian reviews of Australian films suffer a bit from the same effect, but the film community is far less catty than the writing one. Though some reviewers seem to think their job is to flog the local product rather than tell their readers/viewers whether it is worth forking out money to see the flick.
