
Double Disillusion

5

National Polls, Marginal Seats and Campaign Effects

Murray Goot

The most widely remarked feature of the national polls for the House of Representatives was how close their final two-party preferred figures were to the two-party preferred vote. Given anxieties about the validity of the pollsters’ sampling frames and falling response rates, the poor performance of the polls in Britain and the media’s increasing inability to invest large sums in polls (thanks to a decline in revenue from both advertisers and audiences; see Carson and McNair, Chapter 19, this volume; Chen, Chapter 20, this volume), the accuracy of the polls, on this measure, generated relief all around. Based on the two-party preferred vote—the share of the vote going to the Labor and non-Labor parties after the distribution of minor party and Independent preferences—the polls throughout the campaign were able to anticipate a ‘close’ result. But, however ‘close’, even a 50–50 two-party preferred meant that the Coalition was more likely than Labor to form a government.

Less prominently discussed than the two-party preferred vote were the estimates of the major parties’ first preference votes. Here, the polls were less accurate. While the spread of two-party preferred results—from the most favourable to the Coalition to the least favourable—reported by the polls was 2 percentage points, the spread of first preferences for the Liberal–National parties, for Labor, for the Greens and for Others was 3 percentage points. Among journalists and others, the failure of most of the polls to report the level of support for any of the minor parties other than the Greens—only two organisations estimated the level of support in the House for the Nick Xenophon Team—passed without notice. The failure of the media to commission a single poll on the Senate also passed without remark.

Unremarked as well was the quality of the polling in individual seats—some of them ‘marginal’, some of them not—all selected on the grounds that they might change hands. In what loomed as an election too close to call, it was the battles over these seats that were seen as likely to determine the outcome. However, the polls in these seats proved much less reliable than the national polls. The problem was not just that they were automated telephone polls (‘robo polls’), which struggle to capture younger voters. There was a series of other problems, not least the fact that, as usual, their samples were too small. And, for the most part, the pollsters had no other pollsters’ results with which to compare their own to see whether they were outliers or in line.

One of the remarkable things about the national two-party preferred was how close the final polls were not only to the actual two-party result but to each other. This might be taken as a sign that the pollsters’ various methods proved equally good. But it also might raise concerns about pollsters’ adjusting their results so that they are in line with those of others; statistically, the chances of all the polls saying exactly the same thing are not great. In marginal seats, usually polled by no more than one or two pollsters at quite different times, the opportunities for ‘herding’ are minimal.

Predicting the House vote nationwide

All five polls taken in the last few days of the campaign got within 0.6 percentage points of the Coalition’s two-party preferred (50.4 per cent); Ipsos came this close by following the 2013 distribution of preferences used by most of the other pollsters, but not as close when it followed the preferences of its respondents. Viewed historically, the polls produced an exceptionally good result (see Goot 2012: 95, 106; 2015: 129). Especially reassuring for the industry’s public relations was the fact that the pollsters, having largely moved away from the use of landlines—a development lost on some observers (see Errington and van Onselen 2016: 118)—produced good results using a variety of modes. Essential Media used ‘an incentivised online panel, quota sampled and weighted to correct for the known political bias of the panel’. Galaxy and Newspoll combined samples drawn from online panels (about 50 per cent for Galaxy, 60–65 per cent for Newspoll) with samples generated by random digit dialling using interactive voice recognition (IVR) or robo-polling, for which the use of quotas is not possible. Ipsos stuck by the traditional telephone method with interviewers using computer-assisted telephone interviewing (CATI) to contact respondents on landlines (70 per cent) or mobiles (30 per cent). ReachTEL used robos (Ipsos 2016; Lewis 2016; Briggs, pers. comm., 29 August 2016). Two other firms also conducted nationwide polls but stopped early: one, Roy Morgan Research, which combined SMS texting (for roughly half the sample) with face-to-face interviewing for the other half, put its polls behind a paywall at the end of May; the other, Research Now, conducted an online poll in early June for the Australia Institute, which released it.

In addition, two media outlets invited their viewers or readers to fill out questionnaires during the course of the campaign: the ABC, through Vote Compass, attracted over a million participants; Fairfax Media, through Your Vote (a collaboration between Fairfax, Kieskompas and the University of Sydney), attracted almost 212,000 (Posetti, Gauja and Vromen 2016; Koziol and Hasham 2016). Both were conducted under licence: Vote Compass from Vox Pop Labs, which had first deployed it in the 2011 Canadian elections; Your Vote from Kieskompas, launched in Europe in 2014. Neither was an opinion poll in the sense of being based on a systematic sample; respondents simply volunteered. Neither made access to its results easy. Above all, neither revealed how respondents intended to vote.

The differences between the best of the poll results for the Coalition and the worst were politically significant even if they were not statistically significant. David Briggs, the pollster for both Galaxy and Newspoll, was confident that, on his figures, ‘Labor did not pose a serious threat to the government’ (quoted in Benson 2016). Roy Morgan, ignoring its own two-party preferred and an earlier prediction of a contest ‘too close to call’ (Roy Morgan Research 2016a), predicted the return of the government with 80–84 seats (Roy Morgan Research 2016b: 22). On her figures, by contrast, Jessica Elgood from Ipsos saw ‘a hung parliament’, based on the 2013 preference distribution, though, based on the respondents’ stated preferences, the outcome was less likely to be a hung parliament and more likely to be a Labor victory (Ipsos 2016).

Post-election, the verdicts among poll watchers were based not on these predictions but on the pollsters’ numbers. The ‘five [sic] major national pollsters’, Edmund Tadros of Fairfax Media remarked, weeks before the final result was known, ‘were within about one percentage point of the two-party preferred result’ (Tadros 2016). With the Coalition sitting on 50.1 per cent of the two-party preferred when Tadros was writing, the Ipsos poll, he said, was closest. He was cherry-picking: Ipsos, polling for Fairfax, had produced two results, not one. One showed a 50–50 split; it was based on the 2013 distribution of preferences. This was the result given prominence by the Fairfax press (see Kenny 2016; Coorey 2016a). The other result, noted later in his article, showed the Coalition trailing 49–51; it was based on how respondents said they would distribute their preferences (Ipsos 2016). Ipsos, in its final report, had sent Fairfax journalists a mixed message. The report’s headline had the Coalition ‘edg[ing] forward’ since the previous poll; but the text had Labor ‘remain[ing] just ahead of the Coalition’, based on how respondents distributed their preferences, while ‘a hung parliament’ was on the cards if preferences followed the 2013 pattern (Ipsos 2016). This way the pollster had all bases covered.

The Australian focused not on how accurate the polls had been in estimating the two-party preferred but on how well the polls had estimated the parties’ share of the first preferences. On the Monday after the election, with 66 per cent of the two-party vote counted and Labor sitting on 50.2 per cent of the two-party preferred, the Australian’s own poll (Newspoll) appeared to have been less successful than Ipsos in predicting the two-party preferred, though more successful than the Australian’s stablemate (Galaxy). This was not the sort of contrast on which the Australian wished to dwell. By the Australian’s reckoning, however, Newspoll had ‘correctly predicted the primary vote’ (Hudson 2016); the average difference between the official count at the time and Newspoll’s estimate for the Coalition, Labor, the Greens and Others was about 0.2 percentage points. Ipsos (1.9 percentage points adrift) and ReachTEL (0.9 points), the two polls with which Newspoll was compared, had not got this close; neither had the Essential poll (0.8 points out) nor Galaxy (0.6 points). Newspoll had not only recorded the lowest average error, it had done best in predicting that the combined vote for the Greens, minor parties and Independents ‘would be at the highest level in 82 years’ (Hudson 2016). Newspoll’s figure of 23 per cent was closer to the 22.8 per cent (finally 23.3 per cent) for these parties recorded by the Australian Electoral Commission (AEC) (Hudson 2016) than was any other poll.

Weeks later, when the final count had been completed, the Coalition had pulled ahead of Labor, 50.4–49.6, a swing to Labor of 3.1 percentage points two-party preferred. Dennis Shanahan’s boast that ‘Newspoll was the most accurate of the major published polls on both the primary vote and the second preference vote’ (Shanahan 2016) was misleading. Newspoll and Essential, both of which had the Coalition on 50.5, shared the honour for best estimate of ‘the second preference vote’, with Essential putting the Coalition ahead only in its last poll, having had it trailing Labor throughout the campaign. Galaxy and ReachTEL had the Coalition on 51 per cent. Ipsos, the only poll to underestimate the Coalition’s two-party preferred, had it on 49 per cent (stated preferences) or 50 per cent (2013 preferences). Since the primary purpose of a poll is to report responses, not make predictions, the case for preferring the allocation made by the respondents rather than the pollsters is strong. This is reflected in Table 5.1.

In terms of first preferences—the measure against which polls in every other part of the world are held to account—the performance of the polls was ‘more varied’ (Tadros 2016). This was partly because the number of parties on which the pollsters reported varied. Two measures tell us how well the polls did: the average ‘error’ based on the parties for which all the polls provided estimates, and the average based on all the parties for which any of the polls provided estimates (see Table 5.2). The first measure is the more conventional. All the polls estimated the level of support for the Coalition, Labor, the Greens and Others. On this measure, the most accurate poll was Newspoll (the average difference between its estimates and the final results was just 0.2 percentage points), followed by ReachTEL (0.6), Essential (0.8)—notwithstanding Essential’s executive director claiming the number two spot (Lewis 2016)—Galaxy (1.2), Ipsos (1.9), Research Now (2.0) and Morgan (3.4). The polls conducted by Research Now and Morgan were taken a month or so out from the election.
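The conventional measure described here can be illustrated with a short calculation. The figures below are Newspoll's estimates and the election result from Table 5.1; the election's 'Other' share is simply the remainder of the four-way array (13.1 per cent), so the result is a sketch of the measure, not a recomputation of the chapter's own figures.

```python
# Sketch of the conventional accuracy measure: the mean absolute
# difference between a poll's first-preference estimates and the final
# result, over the four groupings every poll reported.

def mean_abs_error(poll: dict, result: dict) -> float:
    """Average absolute difference over the groupings the poll estimated."""
    common = poll.keys() & result.keys()
    return sum(abs(poll[p] - result[p]) for p in common) / len(common)

# Newspoll's final figures and the election result (Table 5.1);
# 'OTH' for the election is the remainder of the four-way array.
newspoll = {"LNP": 42.0, "ALP": 35.0, "GRN": 10.0, "OTH": 13.0}
election = {"LNP": 42.0, "ALP": 34.7, "GRN": 10.2, "OTH": 13.1}

error = mean_abs_error(newspoll, election)
print(f"Newspoll mean error: {error:.2f} points")  # → 0.15, i.e. the ~0.2 reported
```

The same function applied over all the parties a poll estimated, rather than just the four common groupings, gives the second, less conventional measure.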

Table 5.1. Final pre-election public opinion polls for the House of Representatives election, national voting intention, 2 July 2016 (percentages)

Poll | Mode | Fieldwork | Lib*/Nat** LNP | ALP | Greens | NXT | FF | CDP | PHON | KAP | Other/Independent | DK | N | 2PP# (LNP)
Essential | Online | 27–30 June | (39.5/3) 42.5 | 34.5 | 11.5 | 1.5^ | na | na | na | na | 10.5 | [6/7] | (1,212) | 50.5
Fairfax Ipsos | CATI | 26–29 June | 40 | 33 | 13 | 2.0 | na | na | na | na | 12 | [8] | (1,377) | 49
Galaxy | Online + Robo | 28–29 June | 43 | 36 | 10 | na | na | na | na | na | 11 | [6] | (1,786) | 51
Newspoll | Online + Robo | 28 June – 1 July | 42 | 35 | 10 | na | na | na | na | na | 13 | [4] | (4,135) | 50.5
ReachTEL | Robo | 30 June | (38.9/3.9) 42.8 | 34.6 | 10.7 | 1.5^ | na | na | na | 0.6 | 9.8 | [5.1] | (2,084) | 51
Research Now | Online | 23 May – 3 June | 38 | 35 | 11 | na | na | na | na | na | 16 | [22] | (1,437) | na
Roy Morgan | F-to-f + SMS | 21–22 & 28–29 May | (34/3.5) 37.5 | 32.5 | 13 | 5 | na | na | na | 1 | 11 | [2.5] | (3,099) | 49
Election | | 2 July | (37.2/4.9) 42.0 | 34.7 | 10.2 | 1.9 | 1.5 | 1.3 | 1.3 | 0.5 | (3.8/2.8) 6.6 | | | 50.4

Notes. Lib: Liberal Party of Australia; Nat: National Party; ALP: Australian Labor Party; NXT: Nick Xenophon Team; FF: Family First; CDP: Christian Democratic Party; PHON: Pauline Hanson’s One Nation; KAP: Katter’s Australian Party; DK: Don’t know/undecided; N: number of respondents; na: not asked or calculated; F-to-f: face-to-face; ( ): parentheses indicate figures that are not percentages; [ ]: square brackets indicate percentages that are not part of the array that adds to 100.

* Liberal Party + Liberal–National Party (QLD)

** The Nationals + Country Liberals (NT)

^ South Australian respondents only

# Two-party preferred based on the distribution of minor party preferences at the 2013 election, except for Essential and Fairfax Ipsos, whose figures are based on which of the two sides respondents said they preferred; both also reported a two-party preferred based on 2013 preferences (50–50 in both cases).

Question: ‘Have you already voted in the Federal election – which is being held this weekend?’ If YES [22%]: ‘Which party did you give your first preference to: Liberal [all except QLD], National [all except QLD, SA, TAS], Liberal National [QLD only], Labor, Greens, Nick Xenophon Team [SA only], Family First, Independent or other Party, Prefer not to answer [6%]’. ‘Which party did you give your second preference to – out of the Liberal Party and the Labor Party?’ ‘Which party would you give your second preference to – out of the Liberal Party and the Labor Party?’ If NO [78%]: ‘To which party will you probably give your first preference vote in the Federal election being held this Saturday? Liberal [all except QLD], National [all except QLD, SA, TAS], Liberal National [QLD only], Labor, Greens, Nick Xenophon Team [SA only], Family First, Independent or other Party, Prefer not to answer [7%]’. ‘Which party will you give your second preference to – out of the Liberal Party and the Labor Party?’ If not sure; ‘Which party are you currently leaning toward? Liberal [all except QLD], National [all except QLD, SA, TAS], Liberal National [QLD only], Labor, Greens, Nick Xenophon Team [SA only], Family First, Independent or other Party, Don’t know’. ‘Which party will you give your second preference to – out of the Liberal Party and the Labor Party?’ (Essential).

Features characteristic of the inter-election period as a whole were not always evident in the final polls. While three firms (Ipsos, Research Now, Roy Morgan) underestimated the Coalition vote—an ‘industry-wide’ feature of polls conducted between the 2013 and 2016 elections (see Jackman and Mansillo, Chapter 6, this volume)—four (Essential, Galaxy, Newspoll, ReachTEL) did not. And while five organisations (Essential, Ipsos, ReachTEL, Research Now, Roy Morgan) overestimated the Green vote—another ‘industry-wide’ feature of polls conducted over this period—two (Galaxy, Newspoll) did not.

The second measure takes into account the fact that Essential, Morgan and ReachTEL also measured support for the Liberal and National parties separately; that Essential, Ipsos, Morgan and ReachTEL measured support for the Nick Xenophon Team; and that Morgan and ReachTEL measured support for Katter’s Australian Party. Essential, Morgan and ReachTEL under-reported support for the National Party—a longstanding problem for the polls, which is why most reported a figure only for the Coalition (see Goot 2012: 92; 2015: 129). Essential and ReachTEL also under-reported support for the Nick Xenophon Team, while Morgan vastly exaggerated it. For Essential and ReachTEL, the average of their differences on this measure was greater, as Table 5.2 shows, than the average of their differences on the more conventional measure; for Essential it was 1.1 percentage points (0.8 on the more conventional measure) and for ReachTEL 0.9 percentage points (0.6). ReachTEL’s own analysis, however, put its figure at 0.7 percentage points (ReachTEL 2016a). The performance of Ipsos, by contrast, was better on this measure, as was Morgan’s. By either measure, however, every pollster proved better (or no worse) at predicting the two-party preferred.

Table 5.2. Mean differences between the final national polls and the election results, 2016 (percentage points)

Poll | Method | First preferences: all parties and Others* | First preferences: LNP, ALP, Greens, Other | 2PP (LNP)
Essential | Online | 1.1 (6) | 0.8 | +0.1
Fairfax Ipsos | CATI | 1.5 (5) | 1.9 | –1.4
Galaxy | Online + robo | 1.2 (4) | 1.2 | +0.6
Newspoll | Online + robo | 0.2 (4) | 0.2 | +0.1
ReachTEL | Robo | 0.9 (6) | 0.6 | +0.6
Research Now | Online | 2.0 (4) | 2.0 | na
Roy Morgan | Face-to-face + SMS | 1.8 (6) | 3.4 | –1.4
Median | | 1.2 | 1.2 | +0.1

* Numbers in brackets indicate the number of results (parties and Independents) reported

na: not asked or calculated

Source. Derived from Table 5.1, except for Roy Morgan’s two-party preferred difference, which derives from a poll taken on 4–5 and 11–12 June 2016 (Roy Morgan Research 2016b: 15, 28).

Immediately after the election, Chris Mitchell, former editor-in-chief of the Australian, insisted that Newspoll had not only come closest to the actual result (at the time, Ipsos was closest on the two-party preferred, Newspoll closest on first preferences), but also that in doing so it had maintained ‘its three-decade-long reputation as the best and most influential political pollster’ (Mitchell 2016). While Newspoll’s reputation undoubtedly remains high, it generally has not had the better of its rivals (see Goot 2012: 93–97, for 1987–2010; 2015: 128–33, for 2013). In any event, whether the Newspoll of 2016 was the same poll as the ‘three-decade-long’ Newspoll is moot. From July 2015, Newspoll outsourced its operation to Galaxy Research, where it switched from CATI to a combination of online and IVR, a change that the formal announcement did not note (Australian 2015) and to which the Australian never drew attention. The change had important consequences. From marginally underestimating Labor’s first preferences, Newspoll now overestimated them (see Jackman and Mansillo, Chapter 6, this volume).

Ipsos, which started polling for Fairfax in October 2014 after Nielsen withdrew, continued to run CATI.1 Fairfax’s various providers had therefore shifted from face-to-face interviewing to CATI over the last 40 years. This raises the question of identity in a different way: can a poll conducted without interviewers be the same, in anything but name, as a poll conducted with interviewers?

Aside from their statistical accuracy, polls are also judged by their ability to predict who will form government. ‘So convinced were we about a Coalition win’, Peter Martin wrote in Monday’s Sydney Morning Herald, two days after the election, ‘that we paid scant regard to what was happening before our eyes.’ Martin was about to repeat a widespread misunderstanding of the relationship between vote shares (two-party preferred) and seat shares—a misunderstanding that was a feature of the ways in which polls were reported throughout the campaign. ‘Two days before the election, when the Fairfax Ipsos poll was split 50–50,’ he continued, ‘eight out of ten voters polled thought the Coalition would win.’ With the two-party preferred count showing a 50.2–49.8 split in favour of Labor in the counting after the election and Labor ‘steadily’ increasing its seat tally, it had become ‘theoretically possible’ for Labor ‘to form a minority government if it could persuade enough of the six successful independent and minor party candidates to join it’ (Martin 2016).

But a 50–50 split in the two-party preferred never meant that the chances of Labor and the Coalition forming government were equal if the electoral pendulum was a reliable guide. To win office in its own right, Labor needed a national two-party preferred swing not of 3.5 percentage points (the Coalition had won 53.5 per cent of the two-party preferred in 2013) but of 4 percentage points. A swing of 3.5 percentage points would have yielded a gain of 14 or 15 seats, not the 17 seats Labor needed to secure an absolute majority of 20 seats—if we ignore the fact that three seats in New South Wales (NSW) had become notionally Labor following the 2015 redistribution. A 50–50 split would not only have been insufficient if Labor were to govern in its own right (something Elgood realised), but it also made it more likely that if there were to be a minority government it would be formed by the Coalition rather than by Labor. The final two-party preferred vote of 50.4 for the Coalition and an absolute majority of one meant that the number of extra seats Labor won—11 if we ignore the seats that were notionally Labor already—was precisely the number one might have predicted from the pendulum.
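The pendulum logic described above can be sketched in a few lines: under a uniform national swing, a government-held seat changes hands whenever the swing exceeds its two-party margin. The margins below are hypothetical, purely for illustration, not the actual 2016 pendulum.

```python
# A minimal sketch of uniform-swing pendulum arithmetic.
# The margins are hypothetical, not the real 2016 Coalition pendulum.

def seats_gained(margins: list, swing: float) -> int:
    """Count government-held seats whose two-party margin is below the swing."""
    return sum(1 for m in margins if m < swing)

# Hypothetical two-party preferred margins (percentage points) for
# government-held seats, ordered as on a pendulum.
coalition_margins = [0.5, 1.1, 1.8, 2.9, 3.1, 3.4, 3.7, 3.9, 4.4, 5.0]

print(seats_gained(coalition_margins, 3.5))  # → 6 seats fall on a 3.5-point swing
print(seats_gained(coalition_margins, 4.0))  # → 8: a slightly larger swing nets more
```

The non-linearity this produces—an extra half-point of swing delivering a disproportionate number of seats—is why a 50–50 two-party preferred did not imply equal chances of forming government.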

Exit and day-of-the-election polls

For election night, covered on all the free-to-air stations and on Sky News, only Channel Nine commissioned an exit poll. Conducted by Galaxy in ‘25 Coalition held marginal electorates’, between 8 am and 12.30 pm, the poll showed a swing to Labor of 3.4 percentage points—a 50–50 two-party preferred. The booths selected were in Coalition seats that required swings of up to 4.5 percentage points, and in the three NSW seats that were notionally Labor already. Galaxy solicited respondents’ first preference vote and then calculated a two-party preferred. In doing so, it assumed that Labor would secure ‘80% of Green preferences, 70% of Nick Xenophon preferences and just over half from all other minor parties’. With Labor ‘picking up as many as 14 Coalition held seats’—eight in NSW, two in Queensland (QLD), one in South Australia (SA), two in Tasmania (TAS)—this left the result ‘in the balance’ and ‘likely to be tight’ (Galaxy 2016). Galaxy did well. Its error, two-party preferred, across the seats it polled was just one tenth of a percentage point. On first preferences, it was out by an average of just 0.6 percentage points for all parties or an average of 0.5 percentage points if we consider just the Coalition, Labor, Greens and Others.
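Galaxy's conversion of first preferences into a two-party preferred can be sketched as follows. The Green (80 per cent) and Nick Xenophon Team (70 per cent) flows to Labor are those Galaxy stated; the 55 per cent flow from other minor parties (Galaxy said 'just over half') and the first-preference shares themselves are assumptions for illustration only.

```python
# Sketch of deriving a two-party preferred from exit-poll first
# preferences using assumed preference flows, as Galaxy described.
# The 0.55 'other minors' flow and the sample shares are hypothetical.

FLOWS_TO_ALP = {"GRN": 0.80, "NXT": 0.70, "OTH": 0.55}

def labor_2pp(first_prefs: dict) -> float:
    """Labor's two-party preferred share under the assumed preference flows."""
    alp = first_prefs["ALP"]
    for party, flow in FLOWS_TO_ALP.items():
        alp += first_prefs.get(party, 0.0) * flow
    return alp

# Hypothetical first-preference shares summing to 100 per cent.
sample = {"LNP": 42.0, "ALP": 35.0, "GRN": 10.0, "NXT": 3.0, "OTH": 10.0}

print(f"Labor 2PP: {labor_2pp(sample):.1f}")  # → 50.6; Coalition gets the remainder
```

The sensitivity of the result to the assumed flows is the obvious weakness of the method: shifting the Green flow from 80 to 75 per cent moves the estimate by half a point.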

Sky News also commissioned a poll for its election night coverage, but it was neither an exit poll nor, strictly speaking, a day-of-the-election poll. Conducted by OmniPoll, a market research firm run by the former CEO of Newspoll, Martin O’Shannessy, the poll straddled the day of the election and the day before. The poll did not divulge how its respondents had voted or intended to vote; more remarkably, Sky News did not require it to. The information was gathered but not published because the budget allowed for only 500 interviews, a base the pollster deemed too small to produce a sufficiently reliable result.

Rather than try to second-guess the election result before the formal count had begun—the traditional payoff for television stations covering the results—OmniPoll confined itself to reporting three quite different things. One was the issues that respondents rated ‘very important’. The second was whether respondents were more strongly influenced by their ‘liking of the party’ they had voted for (or intended to vote for) or their ‘disliking of the other parties’. And the third was the party that respondents thought would win. The advantage of reporting the answers to these questions and not those to the voting question was that both OmniPoll and Sky News avoided the damaging publicity that would have followed had it misreported the vote. Prudent though its decision with a sample of just 500 might have been, OmniPoll had not reported voting intention figures even when its samples were more than twice as large. This meant that none of its polls ever attracted much attention.

Seats in play

‘Despite elections being awash with polls’, one advertising executive remarked after the 2013 election, ‘very few are relevant because most are national polls’ and ‘[i]t is always in the marginal seats that voter intention matters’ (Madigan 2014: 40). This is mistaken. In 2016, single-seat polls far outnumbered national polls; the same was true in 2013 (Goot 2015: 133). A hallmark of campaign professionals is not just their insistence that it is the swings in the marginal seats that determine the outcome but also that it is the campaign in the marginals that matters (Loughnane 2015: 199; Mills 2015: 123), even though a national swing that predicts the outcome perfectly, as in 2013, shows this is not necessarily so (Goot 2016: 77). Media budgets, however, even more constrained than in 2013, made the commissioning of additional polls in 2016 more difficult, even with polls as inexpensive to run as robo polls (Goot 2014). That the race was sure to be tighter than last time made little difference.

In 2013, the media commissioned 83 polls in single seats (Goot 2015: 133). This time the number was 66. The number of polling organisations involved was smaller as well: just three—Galaxy in 29 seats; Newspoll, 13; ReachTEL, 24—down from five in 2013. And for the first time all the commissioned polls were robos; no media outlet was prepared to pay for interviewers.

Predictably, there was no consensus over which seats to poll, a reflection of the diversity of audience interests as much as it was a judgement of what seats were worth watching. Of the 40 seats polled, only 10 were polled by more than one organisation. Nearly half of the seats polled were polled more than once: three were polled four times (Lindsay and Macarthur in NSW, Corangamite in Victoria (VIC)); two (Dobell, NSW and Bass, TAS) were polled three times; 13 were polled twice (Banks, Reid and Gilmore, NSW; Dunkley, VIC; Brisbane, Capricornia, Herbert and Longman, QLD; Mayo, SA; Braddon, Denison, Franklin and Lyons, TAS).

The seats most vulnerable on paper were not necessarily the seats most frequently polled; some of the most vulnerable were not polled at all. Of the 25 Liberal–National Party seats held on margins (two-party preferred) of less than six percentage points (the definition of a ‘marginal’ seat used by the Australian Electoral Commission), 18 were polled and seven were not. The seats not polled included Solomon (NT), where limited Indigenous access to landlines may have put off some (but cf. Walsh 2016); Eden-Monaro, a ‘bellwether’ seat, as well as Macquarie and Page (NSW); La Trobe (VIC); and Forde (QLD). Held by margins of 4.5 percentage points or less, these seats were either not considered to be in play or written off by the media as certain losses; Solomon, Eden-Monaro and Macquarie would fall to Labor. Ten of the Coalition’s seats not classified as ‘marginal’ were also polled. Of these, Herbert and Longman (QLD) would fall to Labor and Mayo (SA) to the Nick Xenophon Team. Nine Labor seats, including the notionally Labor seat of Dobell (NSW), were polled too; Chisholm (VIC) would be the only one Labor lost. Two other seats that were polled were ‘safe’, though not for either the Coalition or Labor: Denison (TAS) held by Andrew Wilkie; Kennedy (QLD) held by Bob Katter.

There was nothing inherently odd about any of this. Introducing the pendulum to Australian politics, Malcolm Mackerras had emphasised that seats could swing in both directions; that the seats requiring the smallest swings to change hands were not necessarily the seats most likely to fall; and that the seats requiring the largest swings were not necessarily the safest (Mackerras 1972: 5). The only thing odd was to describe all the polls in single seats as ‘marginal seat’ polls.

Who polled in which seats, for whom and when? The first six of Galaxy’s single-seat polls were conducted in the first two days of the campaign, 10–11 May, for the Daily Telegraph; another two, a day later, were conducted for the Courier-Mail. Galaxy’s next poll, for the Advertiser, was not conducted until 15 June. During the middle weeks of the campaign, none of News Corp’s metropolitan mastheads commissioned any single-seat polling. In mid-June, Newspoll did single-seat polls for the Australian. Most of Galaxy’s single-seat polls were conducted a week later, 20–22 June (four seats for the Herald Sun, six for the Courier-Mail, two for the Advertiser) or 21–22 June (when it re-polled the original six for the Daily Telegraph). Another two seats, Adelaide and Port Adelaide, were polled for the Advertiser on 28–29 June, the last Tuesday and Wednesday—in time for Friday’s paper on election eve.

ReachTEL started and finished its single-seat polling at roughly the same time as Galaxy, but a higher proportion of its polls were conducted at the beginning and in the middle of the campaign. It polled five seats on Thursday 12 May for the Mercury (the only News Corp masthead to commission polls from a company other than Galaxy), Macarthur (NSW) the following Thursday for 7 News, and Corangamite (VIC) for 7 News the Thursday after that. On Thursday 9 June, ReachTEL produced eight more polls, seven of them for Fairfax—two in NSW and VIC, and one in each of the other mainland States; halfway through a very long campaign, they helped Fairfax inject something of interest into its reporting. A week later, ReachTEL polled Hasluck (WA) for 7 News. On Thursday 23 June, it conducted another six polls (covering the five seats in TAS it had polled on 12 May plus Cowper in NSW) with a final poll in Chisholm on Thursday 30 June. Polling on Thursday meant the results were ready for the last Friday evening’s TV news or for the Saturday papers.

Aside from their news value, how useful were these polls as guides to the results? The short answer: not very. One measure of their success is the extent to which they picked the winners in these seats. On this measure, Newspoll did best and ReachTEL worst. In the 13 seats it polled, Newspoll predicted the winner in nine. In the 23 seats Galaxy polled, it had the eventual winner ahead in 15. In the 21 seats ReachTEL polled for either its media clients or the New South Wales Teachers Federation (2016), and for which it calculated a two-party or two-candidate preferred (it failed to do so in Macarthur), it picked the winner in just 10—roughly the number it would have got right had it chosen between the two leading candidates in each seat at random.

A better measure is the difference between the estimates provided by the polls and the final result, two-party preferred. Here, Galaxy did best; Newspoll, worst. But the best was not very impressive and the worst was pretty poor. For Galaxy, the median difference across its 23 polls was 2.1 percentage points, the mean 3.0 points, with the differences ranging from 0.1 percentage points (Corangamite) to 12.3 percentage points (Port Adelaide). For ReachTEL, the median difference for its 21 polls was 2.6 percentage points, the mean 3.3 percentage points, the differences ranging from 0.6 percentage points (Deakin) to 8.3 percentage points (Macquarie). Across the 13 seats polled by Newspoll, the median difference was 4.1 percentage points, the mean 4.0, the best being 0.1 percentage points in Robertson, the worst Macarthur (8.3), Bass (8.1) and Batman (7.6). State by State, too, the performance of each of the pollsters was very uneven.

Table 5.3. Differences between polls’ estimate of party support and the final vote, single seats, campaign period, 2016 (percentage points)

Poll | Dates | N | Two-party preferred (mean / median) | LNP, ALP, Greens, Other (mean / median) | All entities (mean / median)
Galaxy | 20–29 June* | 21 | 3.0 / 2.1 | 2.6 / 2.4 | 2.4 / 2.2
Morgan | Jan – 11–12 June | 31 | na / na | 5.0 / 5.3 | 4.7 / 5.1
Newspoll | 13–15 June | 13 | 4.0 / 4.1 | 2.9 / 2.9 | 2.9 / 2.9
ReachTEL | 12 May – 30 June‡ | 21 | 3.3 / 2.6 | na / na | na / na

Note. Where a firm conducted more than one poll in a seat only the poll conducted closest to the election is included.

* Excludes Herbert and Leichhardt (QLD) polled 12 May; with these two seats included, the mean for LNP, ALP, Greens, Other is 3.3 and the median is 2.6 percentage points

‡ Excludes Macarthur (NSW) for which no two-party preferred was reported

Source. David Briggs, pers. comm., 23 October 2017, for Galaxy and Newspoll; Roy Morgan Research 2016c, 2016d; ReachTEL 2016c, 2016d, 2016e, 2016f; Clark 2016; Nicholas Clark, pers. comm., 1 September 2016; Mark Kenny, pers. comm., 2 August 2016.

The best measure—the most direct—is the average difference in first preferences. On this measure, the results are even less impressive. Galaxy estimated the level of support for the Coalition, Labor, the Greens and Others. The average difference between its measure and the actual results was 3.3 percentage points, the median difference 2.6 points. The mean was blown out by the very early polls (12 May) in Herbert, where the ‘Other’ vote was underestimated by 28 percentage points, and Leichhardt, where it was underestimated by 14.8 percentage points, thanks largely to Galaxy’s not having either Pauline Hanson’s One Nation (PHON) or Katter’s Australian Party (KAP) on the list of parties from which respondents could choose. In neither case, however, did this appear to have any impact on Galaxy’s estimate of the two-party preferred. If we exclude these two polls, the mean difference drops to 2.6 and the median to 2.4 percentage points. In 14 seats, Galaxy estimated support for some of the minor parties and Others separately. If we use this measure (and exclude Herbert and Leichhardt), the average difference between Galaxy’s measure for every party (including Others) and the final figures was 2.4 percentage points with a median of 2.2 percentage points. For Newspoll, the corresponding means and medians were 2.9 percentage points, regardless of whether the measure was based on Labor, LNP, Greens and Other, or on all the parties for which Newspoll furnished figures. ReachTEL did not always produce comparable first preferences; it showed how the ‘don’t knows’ might divide after being pressed to name the party to which they were ‘leaning’, but only Fairfax asked it to add these to its initial results to provide an overall distribution of its first preferences figures.
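The pull a single outlier exerts on the mean but not the median, as with Galaxy’s 28-point underestimate of the ‘Other’ vote in Herbert, can be illustrated with a short sketch. The first six error values below are invented for the illustration; only the 28-point miss comes from the text.

```python
from statistics import mean, median

# Absolute poll-vs-result differences, in percentage points. The first six
# values are hypothetical stand-ins; the 28.0 represents Galaxy's reported
# underestimate of the 'Other' vote in Herbert.
errors = [2.0, 2.2, 2.4, 2.6, 2.8, 3.0, 28.0]

print(round(mean(errors), 1))    # 6.1: the outlier drags the mean up
print(round(median(errors), 1))  # 2.6: the median barely notices
```

Dropping the outlier, as the chapter does in excluding Herbert and Leichhardt, brings the two measures back into line, which is why reporting both is more informative than reporting either alone.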

Morgan also released figures for a number of seats based on a combination of SMS texting and face-to-face interviewing. These figures proved to be the least reliable of all. The first tranche, which reported the responses from 1,951 South Australian voters between 2–3 April and 11–12 June, covered 11 seats (about 180 interviews per seat) where Morgan expected the Nick Xenophon Team to do well and to ‘nick’ the Coalition seats of Mayo and Grey (Roy Morgan Research 2016c). The second, derived from the Greens’ 20 ‘best performing electorates’ in 2013, was based on 6,283 respondents (about 310 per seat) contacted between January and 11–12 June; it led Morgan to predict that Labor would lose Batman in Victoria (Roy Morgan Research 2016d). Morgan’s errors were not confined to poor predictions. Its first preferences figures (it avoided the two-party preferred) were woefully poor guides. As Table 5.3 shows, both the mean error (5.0 or 4.7 percentage points, depending on which parties one includes) and the median error (5.3 or 5.1 percentage points) were twice the size of those recorded by Galaxy seat-by-seat or by Newspoll.

Why such large errors? The time that elapsed between the taking of some of the polls and the holding of the election made a difference. However, leaving aside those cases where the time elapsed was considerable, the late polls were not necessarily more accurate than the early polls (see also Jackman and Mansillo, Chapter 6, this volume); in any event, the gap between the taking of the polls and the casting of votes was reduced to some extent by the increased frequency of early voting. Sample size, almost certainly, played a bigger part. The number of respondents interviewed by Galaxy ranged from 502 to 714; the corresponding figures for ReachTEL were 610 to 836. Morgan’s samples were smaller still; even so, its ‘[s]tated confidence intervals’ were ‘far too small’ (ibid.). No doubt, trouble drawing samples also played a part; robo-polling is restricted to landlines and oversamples women and older voters, those most frequently at home and most likely to answer the phone. That few seats were polled more than once or by more than one pollster cannot have helped either; the way pollsters adjust their raw figures in national polls depends in part on their being able to operate in an information-rich environment, allowing them to look back to how particular demographic groups responded in earlier polls and across at the published figures in other polls.

The Senate

A Senate with a substantial crossbench that the government would struggle to control was as widely anticipated as the return of the government. Yet, while media outlets commissioned polls to determine the likely outcome in the House, they commissioned not a single poll on the Senate. One reason was the cost; it always has been. Elections for the Senate are State based; Senate polling, done properly, requires separate samples in each State. Another reason may have been the complexity of the ballot. The number of parties nominating candidates for the Senate is very large. In NSW, a ballot paper had to be fashioned to make room for over 40 parties and Independents; in no State did the ‘tablecloth’ list fewer than 22. Moreover, where a ballot may require six places to be filled (let alone 12), with the final place (or places) to be determined on the basis of complex preference flows, converting poll numbers into seats has become fiendishly difficult. But the main reason for the dearth of polls was the lack of interest. No matter the Senate’s importance, interest in Senate elections has always run a poor second to elections for the House.

The Australia Institute was responsible for the only poll released on the Senate. Conducted online between 23 May and 3 June by Research Now, the poll reported the level of support for each of 10 parties (including Independents) among 1,427 respondents Australia-wide. Compared with the election results four weeks later, it overestimated support for Labor and the Greens and underestimated support for minor parties and Independents (Oquist 2016). It estimated support for the Liberals (34 per cent) and Nationals (2 per cent) at 36 per cent (the Coalition secured 35.2 per cent of the vote), Labor at 33 per cent (29.8 per cent), the Greens at 12 per cent (8.7 per cent), PHON at 5 per cent (4.3 per cent), the Nick Xenophon Team at 4 per cent (3.3 per cent), the Jacquie [sic] Lambie Network at 1 per cent (0.5 per cent), the Palmer United Party at 0 per cent (0.2 per cent), the Glenn Lazarus Team at 0 per cent (0.3 per cent), and Independents and Others at 8 per cent (17.8 per cent).

The underestimation of the vote for the minor parties, and the failure to conduct the poll State by State, led to a series of conclusions that were of limited value (Oquist 2016): that the proportion intending to just vote 1 ‘could lead to a big exhaustion of votes’ (it did not; the informal vote was less than 4 per cent); the ‘big exhaustion … could mean last seats will be won with low primary vote’ (a predictable outcome, regardless of any polling); ‘Hanson likely to be elected in Queensland’ (not one but two PHON candidates were elected in Queensland); ‘Xenophon a chance of picking up seats outside South Australia’ (he was not); ‘Andrew Bartlett (Greens) a chance of returning to Parliament’ (Bartlett was never in contention given the Greens’ vote); and ‘Coalition may require either Hanson or Greens votes in Senate to pass legislation’ (since the Greens lost a seat, neither would prove sufficient).

Reflections

Post-election, polls are largely assessed on their predictive value. In 2016, Newspoll’s estimates were the best; Galaxy finished equal third (two-party preferred) or fourth (first preferences measured in two ways), though its exit poll was about as good as it gets. Perhaps, one rival suggested, Newspoll did best because it polled last (Lewis 2016). Polling as close to an election as possible helps pick up any late swing. It also gives pollsters a ‘last-mover advantage’ (Goot 2012: 107), notwithstanding the risk that everyone gets it wrong. At the 1959 British election, the involvement of a single market research firm in producing polls for more than one brand caused a stir (Butler and Rose 1960: 99–100). At this election, Galaxy Research’s production of both the Galaxy poll and Newspoll was barely noticed.

Since estimates of the two-party preferred have become the pre-eminent measure of the vote (Goot 2016), it is important to note that while the estimates offered by some polls (Galaxy, Newspoll and ReachTEL) were based on how preferences flowed at the last election, those collected by Essential were based on respondents’ own reports, while those presented by Ipsos were based on both. Ideally, we should be able to compare two-party preferred results arrived at in the same way; preferably, the way respondents imagine they will distribute them rather than the way they were distributed at the last election (when some of the small parties did not even exist). Unless we can compare like with like, comparisons across polls remain flawed. The lower priority attached to how well the polls did in estimating the parties’ first preferences is a corollary of the focus on the two-party preferred. So, too, is the continuing lack of interest in commissioning polls on the Senate, an election to which the two-party preferred does not apply, for political reasons rather than technical ones (Goot 2016: 83).

That the national polls did well, especially in relation to the two-party preferred, is important; Wayne Errington and Peter van Onselen’s (2016: 150) lament that ‘published polls aren’t what they once were’ is misleading. Whether the accuracy of pre-election polls is a generalisable test of how well the polls measure party support between elections is a separate matter. The potential not just for a last-mover advantage but also for ‘herding’ (Linzer 2012) is one thing that sets elections apart. Another is the way elections concentrate respondents’ minds. This does not necessarily make polls in a non-election period ‘hopelessly hypothetical’ (Brent 2016), but it may mean that they need to be read differently. In addition, ahead of the vote, final samples tend to be uncommonly large. Newspoll more than doubled its final sample to 4,135 (from 1,713 in its previous poll), Galaxy boosted its sample to 1,786 (from 1,390) and Essential increased its sample to 1,212 (from 1,000). Only two polls did not increase their samples: Fairfax Ipsos and ReachTEL slightly reduced theirs. Even if national polls are good at measuring party support, it does not follow that they are good at measuring anything else, whether during the campaign or in the inter-election period more broadly. But that’s another story.

As for the single-seat polls—‘marginal seats’ is a misnomer for a practice that takes in seats that are not marginal and ignores seats that are—their accuracy did not become a topic in the media’s post-election debate; as the pollsters knew, they were never likely to become a topic. That the polling in single seats was overlooked in the wash-up was just as well since their performance was far from impressive. Except for their value to the media as attention grabbers, polls with small numbers in ‘battleground’ seats remain dubious additions to polls nationwide. State-wide polls, whatever their commercial merits, would make more political sense. They would not satisfy an editor’s interest in ‘bellwether’ seats, seats very narrowly held or other seats of special interest; however, by cannibalising the samples used in single-seat surveys, they would throw a more reliable light on State differences than national polls do—differences that, not for the first time, cancelled out in the national swing of 2016.

A close result, even a one-seat majority, which could have been inferred from simply knowing what the national swing would be (two-party preferred), raises fundamental questions not about marginal seat polls but about marginal seat campaigning. In net terms, local campaigns at this election, as in the last, would seem to have counted for little. Labor’s marginal seat research conducted in the final week of the campaign, Troy Bramston argues, shows how ‘saving Medicare’ was used with ‘devastating effectiveness’ in NSW ‘in winning Lindsay, Macarthur and Paterson’. The evidence he advances is the proportion of respondents in these seats mentioning some variant of ‘saving Medicare’, in research commissioned by the Labor Party, as ‘the most important factor in deciding their vote’ (Bramston 2016). But note that in Lindsay, where the two-party preferred swing to Labor was 4.1 percentage points (not much greater than the State-wide swing of 3.8 percentage points), 31 per cent nominated ‘saving Medicare’ (whether volunteered or from a list is not clear). In Paterson, where the swing was 10.4 percentage points, more than twice as great, the proportion (28 per cent) that had ‘protecting Medicare’ as the ‘most important factor’ was about the same. However, in Macarthur, where the proportion mentioning ‘Medicare and bulk-billing’ (42 per cent) was much greater, the swing (11.7 percentage points) was little greater than in Paterson. Evidence of ‘devastating effectiveness’? Leaving aside the question of whether the issue drew voters to Labor or whether the campaign simply gave Labor voters an issue they could name, the match between the number of times the issue was cited and the size of the swing is not very close.

If Labor’s ‘Mediscare’ was as successful as the conventional wisdom assumes (see, for example, Aston 2016; Errington and van Onselen 2016: 120, 179–80; but cf. Street 2016: 319–20), and as strategists on both sides appear to believe (see Aston 2016; Massola 2016), it can only have succeeded by neutralising some equally sizeable advantage the Coalition must have enjoyed before the scare campaign; otherwise, the Coalition would have defied the pendulum and won a disproportionate share of the seats. However, there is no clear evidence of the Coalition enjoying any advantage in the marginals prior to ‘Mediscare’. If they did, there is no evidence from the campaign that they thought Labor had reduced it. ‘[T]he Coalition appeared genuinely confident’, Errington and van Onselen reported at the end of the campaign, ‘that it could win’ not 76 seats that might have been predicted from its share of the two-party preferred vote but ‘somewhere between 79 and 83 seats’ (2016: 151; see also Di Stefano 2016: 182, 194, 205, 208–209).

If ‘Mediscare’ worked in marginal seats, it is difficult to say why it would not have worked nationally. The Omnipoll for the election-night coverage on Sky News, in which more respondents named ‘health and Medicare’ than named any other issue (from a list of seven) as ‘very important’ to their vote—a result said to have ‘sent a chill down the spines of those watching at Coalition HQ’ (Di Stefano 2016: 215)—was conducted not in marginal seats but nationally. (The Galaxy exit poll for Channel Nine, conducted mainly in Coalition marginals, also had ‘health and Medicare’ as the number one issue from a list of 11; Galaxy 2016.) Yet, the evidence from national polls is that there was very little movement in voting intention between the dissolution of the Parliament on 21 March and the day of the election, 2 July. If there was a small rise in Labor support in the last month of the campaign, there was also a small rise in the Coalition’s support and in the Coalition’s two-party preferred vote (see Jackman and Mansillo, Chapter 6, this volume, Figure 6.3; neither rise was statistically significant). Asked weekly by ReachTEL, between 2 June and 30 June, which of seven issues would influence their voting decision ‘most’, the proportion of respondents that nominated ‘health services’ changed hardly at all: 21 per cent at the beginning of the period, nine days before Bob Hawke’s intervention started the ‘Mediscare’, 23 per cent at the end. Over the same period, the proportion nominating ‘management of the economy’ rose from 24 per cent to 30 per cent (ReachTEL 2016b). If the Liberal Party’s polling encouraged the government to believe it might be returned with a more comfortable majority, on this evidence it was not because it missed the damage wrought by the ‘Mediscare’.

For an alternative explanation of how many seats were won or lost and which seats they were, we need to come to grips with the electoral forces at work nationally, including the slide in support for the Prime Minister from around March 2016 (Wikipedia 2016), but almost certainly involving other factors that, except in very close contests, make election results predictable from the outset of the campaign (Gelman and King 1993). In addition, we need to consider regional factors, including the electoral standing of the various State governments (Street 2016: 298), and State-based or regionally based differences in economic wellbeing. For Labor to win the six most marginal Coalition seats at the next election, it may need (as Shorten insists) to win just 2,000 extra votes (Coorey 2016b). It is all but inconceivable that Labor could win these seats, however, without winning many times this number of votes across the nation, including the corresponding States.

The paradox of the pendulum is that if it points campaigners to seats that they target, so that they win more seats than could be predicted from a national swing, then the pendulum does not work; but if it does work—as it did this time and it has, by and large, before (Goot 2016: 76, 86)—then we may have to accept the conclusion that when everyone targets the same seats, neither one side nor the other is likely to prevail. Yes, some seats will be won or saved. However, an equal number of seats is likely to be lost. In individual seats it may be only ‘the last-minute scare’—when there is not enough time for the other side ‘to learn of the tactic and denounce it as an outrageous lie’—that has any hope of being decisive (MacCallum 2002: 105–6); in 2016, ‘Mediscare’ was denounced before election day. On the most radical view, even the best-researched and best-resourced marginal-seat campaigns that meet no opposition may prove futile: ‘energetic’ campaigns run locally may be no more efficacious than ‘idle’ or ‘incompetent’ ones (Butler 1997: 235; but cf. Studlar and McAllister 1994: 402–4). It is the possibility that one side can prevail in the marginals, defying the pendulum, which keeps campaigners enthralled and pollsters floundering in their wake.

Acknowledgements

My thanks to Tom Swann of the Australia Institute, Andrew Bunn of Essential Media Communications, Mark Kenny of Fairfax Media, David Briggs of Galaxy Research and Newspoll, Jessica Elgood at Ipsos, Nicholas Clark from the Mercury, Martin O’Shannessy of Omnipoll, James Stewart of ReachTEL, Julian McCrann at Roy Morgan Research and John Utting at Utting Research. The research was supported by the Australian Research Council under DP150102968.

References

Aston, Heath. 2016. ‘ALP appoints Victorian party boss Noah Carroll to steer next federal election campaign’. Sydney Morning Herald, 23 September. (Original title: ‘ALP appoints Carroll to steer next campaign’). Available at: www.smh.com.au/federal-politics/political-news/alp-appoints-victorian-party-boss-noah-carroll-to-steer-next-federal-election-campaign-20160923-grn6fw.html

Australian. 2015. ‘Galaxy research to conduct polling for Newspoll’. Weekend Australian, 4 May. Available at: www.theaustralian.com.au/business/media/galaxy-research-to-conduct-polling-for-newspoll/news-story/aced8102424eee2fe77f02cdb5fdb0da

Australian Labor Party (ALP). 2016. ‘Bob Hawke speaks out for Medicare, do you?’ YouTube, 11 June. Available at: www.youtube.com/watch?v=pZ9EfrpPcQs

Benson, Simon. 2016. ‘Malcolm is in top slot’. Daily Telegraph, 1 July.

Bramston, Troy. 2016. ‘Federal election 2016: Labor’s post-poll lesson should be on arithmetic’. Weekend Australian, 12 July. Available at: www.theaustralian.com.au/opinion/columnists/troy-bramston/federal-election-2016-labors-postpoll-lesson-should-be-on-arithmetic/news-story/0f2a12a4a40b6477697e27e82c40fcaf

Brent, Peter. 2016. ‘We’re drowning in opinion polls, so here’s what to make of them’. The Drum, ABC, 8 April. Available at: www.abc.net.au/news/2016-04-08/brent-we’re-drowning-in-opinion-polls/7309796

Butler, David. 1997. ‘Six notes on Australian psephology’. In Clive Bean, Scott Bennett, Marian Simms and John Warhurst (eds), The Politics of Retribution: The 1996 Election. St Leonards: Allen & Unwin, pp. 228–40.

Butler, David and Richard Rose. 1960. The British General Election of 1959. London: Macmillan.

Clark, Nick. 2016. ‘Poll suggests it could be adios to Tasmania’s three amigos’. Mercury, 25 June. Available at: www.themercury.com.au/news/tasmania/poll-suggests-it-could-be-adios-to-tasmanias-three-amigos/news-story/96f94f437b6be8c252fa350476fa317f

Coorey, Phillip. 2016a. ‘PM set to scrape home’. Australian Financial Review, 1 July.

——. 2016b. ‘The Turnbull government is in a weak position, so people believe the worst’. Australian Financial Review, 4 November. Available at: www.afr.com/news/the-turnbull-government-is-in-a-weak-position-so-people-believe-the-worst-20161031-gsf4tr

Di Stefano, Mark. 2016. What a Time to be Alive: That and Other Lies of the 2016 Campaign. Melbourne: Melbourne University Press.

Errington, Wayne and Peter van Onselen. 2016. The Turnbull Gamble. Melbourne: Melbourne University Press.

Galaxy. 2016. ‘Election result on a knife edge’, 2 July.

Gelman, Andrew and Gary King. 1993. ‘Why are American presidential election campaigns so variable when votes are so predictable?’ British Journal of Political Science 23(4): 409–51. doi.org/10.1017/S0007123400006682

Goot, Murray. 2012. ‘To the second decimal point: How the polls vied to predict the national vote, monitor the marginals and second-guess the Senate’. In Marian Simms and John Wanna (eds), Julia 2010: The Caretaker Election. Canberra: ANU E Press, pp. 85–110. Available at: press-files.anu.edu.au/downloads/press/p169031/pdf/ch06.pdf

——. 2014. ‘The rise of the robo: Media polls in a digital age’. In Australian Scholarly Publishing’s Essays 2014: Politics: The first volume of a new series of works by Australian scholars. North Melbourne: Australian Scholarly Publishing, pp. 18–32.

——. 2015. ‘How the pollsters called the horse-race: Changing polling technologies, cost pressures, and the concentration on the two-party-preferred’. In Carol Johnson and John Wanna with Hsu-Ann Lee (eds), Abbott’s Gambit: The 2013 Election. Canberra: ANU Press, pp. 123–41. doi.org/10.22459/AG.01.2015.08

——. 2016. ‘The transformation of Australian electoral analysis: The two-party preferred vote − origins, impacts, and critics’. Australian Journal of Politics & History 62(1): 59–86. doi.org/10.1111/ajph.12208

Hudson, Phillip. 2016. ‘Newspoll correctly predicts the tight votes outcome’. Australian, 4 July.

Ipsos. 2016. ‘Election race neck and neck - but coalition edges forward’. 1 July. Available at: ipsos.com.au/fairfax/election-race-neck-and-neck-but-coalition-edges-forward/

Kenny, Mark. 2016. ‘Cliffhanger’. Sydney Morning Herald, 1 July.

Koziol, Michael and Nicole Hasham. 2016. ‘Labor’s odds blow out, but what’s the reason?’ Sydney Morning Herald, 1 July.

Lewis, Peter. 2016. ‘Close call: How the pollsters got the election result right’. Guardian, 8 August. Available at: www.theguardian.com/commentisfree/2016/aug/08/close-call-how-the-pollsters-got-the-election-result-right

Linzer, Drew. 2012. ‘Pollsters may be herding’. VOTAMATIC, 5 November. Available at: www.votamatic.org/pollsters-may-be-herding

Loughnane, Brian. 2015. ‘The Liberal campaign in the 2013 federal election’. In Carol Johnson and John Wanna with Hsu-Ann Lee (eds), Abbott’s Gambit: The 2013 Australian federal election. Canberra: ANU Press, pp. 191–201. doi.org/10.22459/AG.01.2015.11

MacCallum, Mungo. 2002. How to be a Megalomaniac or Advice to a Young Politician. Sydney: Duffy and Snellgrove.

Mackerras, Malcolm. 1972. Australian General Elections. Sydney: Angus and Robertson.

Madigan, Dee. 2014. The Hard Sell: The Tricks of Political Advertising. Melbourne: Melbourne University Press.

Martin, Peter. 2016. ‘Poll was on the money, but our money was on the Coalition’. Sydney Morning Herald, 4 July.

Massola, James. 2016. ‘Nutt falters in awkward showdown’. Sydney Morning Herald, 23 September.

Mills, Stephen. 2015. ‘Parties and campaigning’. In Narelle Miragliotta, Anika Gauja and Rodney Smith (eds), Contemporary Australian Political Party Organisations. Melbourne: Monash University Publishing, pp. 115–26.

Mitchell, Chris. 2016. ‘Newspoll closest to the mark on count’. Australian, 4 July.

New South Wales Teachers Federation. 2016. ‘Coalition vulnerable as voters in NSW marginals shift to parties backing Gonski schools’ funding’. Media release: 22 June. Available at: www.nswtf.org.au/files/160622_t_fed_polljune.pdf

Oquist, Ben. 2016. ‘Polling and Senate voting analysis’. The Australia Institute, 10 June. Available at: www.tai.org.au/content/polling-and-senate-voting-analysis

Posetti, Julie, Anika Gauja and Ariadne Vromen. 2016. ‘Yourvote: Decide who to vote for in Australia’s 2016 federal election’. Sydney Morning Herald, 7 June. Available at: www.smh.com.au/federal-politics/federal-election-2016/yourvote-decide-who-to-vote-for-in-australias-2016-federal-election-20160605-gpbvho.html

ReachTEL. 2016a. ‘2016 federal election – polling accuracy’. 14 July. Available at: www.reachtel.com.au/blog/2016-federal-election-polling-accuracy

——. 2016b. ‘7 News – National Poll – 30 June 2016’. 1 July. Available at: www.reachtel.com.au/blog/7-news-national-poll-30june16

——. 2016c. ‘7 News – Longman Poll – 2 June 2016’. 3 June. Available at: www.reachtel.com.au/blog/7-news-longman-poll-3june2016

——. 2016d. ‘7 News – Grey Poll – 9 June 2016’. 10 June. Available at: www.reachtel.com.au/blog/7-news-grey-poll-9june16

——. 2016e. ‘7 News – Hasluck Poll – 16 June 2016’. 17 June. Available at: www.reachtel.com.au/blog/7-news-hasluck-poll-16june2016

——. 2016f. ‘7 News – Chisholm Poll – 30 June 2016’. 1 July. Available at: www.reachtel.com.au/blog/7-news-chisholm-poll-30june2016

Roy Morgan Research. 2016a. ‘Election now too close to call: ALP 51% cf. L-NP 49%. Minor Parties “won” last night’s Leader’s debate’. 30 May. Available at: www.roymorgan.com/findings/6831-morgan-poll-federal-voting-intention-may-30-2016-201605300615

——. 2016b. ‘Roy Morgan state of the nation 24: Focus on politics the 2016 federal election’. 16 June. Available at: www.roymorgan.com/findings/6856-roy-morgan-state-of-the-nation-june-2016-201606161810

——. 2016c. ‘Nick Xenophon team looks likely to nick L-NP seats of Mayo and Grey’. 16 June. Available at: www.roymorgan.com/findings/6855-roy-morgan-south-australian-seat-analysis-june-2016-201606161716

——. 2016d. ‘Melbourne’s Batman set to turn Green at Federal Election’. 20 June. Available at: www.roymorgan.com/findings/6858-melbourne-batman-set-to-turn-green-at-federal-election-july-2016-201606171633

Shanahan, Dennis. 2016. ‘Newspoll election tips on the money’. Weekend Australian, 16–17 July.

Street, Andrew P. 2016. The Curious Story of Malcolm Turnbull: The Incredible Shrinking Man in the Top Hat. Sydney: Allen & Unwin.

Studlar, Donley T. and Ian McAllister. 1994. ‘The electoral connection in Australia: Candidate roles, campaign activity and the popular vote’. Political Behavior 16(3): 385–410. doi.org/10.1007/BF01498957

Tadros, Edmund. 2016. ‘Fairfax/Ipsos poll gives most accurate result (so far)’. Australian Financial Review, 4 July.

Walsh, Christopher. 2016. ‘An independent poll shows Solomon MP Natasha Griggs will struggle to retain her seat at the federal election’. NT News, 27 June. Available at: www.ntnews.com.au/news/northern-territory/an-independent-poll-shows-solomon-mp-natasha-griggs-will-struggle-to-retain-her-seat-at-the-federal-election/news-story/187faae4f0bcbcbe0d2927227cec9b01

Wikipedia. 2016. ‘National opinion polling for the Australian federal election, 2016’. Wikipedia. Available at: en.wikipedia.org/wiki/National_opinion_polling_for_the_Australian_federal_election,_2016


1 Nielsen had decided that the publicity value of running a poll was no longer worth the cost.

