How did the 2016 EU Referendum come about?

Cara H, Editor of Unconquered Peaks, looks at the key reasons that led David Cameron to hold the 2016 EU referendum.

In this essay I focus on the factors that led to the 2016 Referendum being held, rather than on the result. David Cameron promised a referendum on the UK’s membership of the European Union (EU) in 2015, giving the British public the right to decide whether their future lay in or out of the EU. They chose to leave, by a margin of 51.9% to 48.1%. The UK-EU relationship has been complicated and fraught ever since the UK joined the bloc (then the European Economic Community) in 1973. The factors analysed here are ‘important’ in that they fostered Euroscepticism in British politics or among the British public, and/or put political pressure on Cameron to hold a referendum on EU membership.

I argue that the UK’s historic reasons for joining contrast sharply with the EU’s current aims. Immigration, general anti-immigration sentiment, and the rise of UKIP (which are closely linked) strongly contributed to both Euroscepticism and the political pressure on Cameron. I also touch on Cameron himself, and his decision-making around quelling his backbenchers.

A transactional vs political relationship

Britain has always viewed the EU differently to its European neighbours. Whilst most of the continent see themselves as European, Britons are the least likely to have Europe form part of their identity (see graph below), and do not feel the same allegiance to Europe as the Germans or the French. Instead, we view our relationship with the EU as transactional, through a cost-benefit, economic lens. This can be traced back clearly to our original reasons for joining.


(Eurobarometer, 2015)

In the late 1950s, Britain was stuck in a post-war economic rut, while Germany and France were enjoying strong growth. Britain’s spheres of influence were shrinking, and trade with the USA and the Commonwealth had declined. This fed the belief that joining the bloc might remedy the UK’s economic problems. Macmillan, the Prime Minister at the time, “saw the European Community as an economic panacea… here was a way in which the British economy could overcome so many of its problems without resorting to a radical and painful domestic economic overhaul” (Holmes, n.d.). Britain’s reasons for joining contrast sharply with the EU’s increasingly political aims. Though Britain arguably shares the aims of the European Project, it has never shared the same desire to become one with Europe, and is interested in the EU only economically. Having joined for economic reasons, Britain was later confronted with political integration it had not signed up for, and tensions increased.

These tensions between an economic, free-trade-based union and a politically integrative one have been the backdrop to the UK’s interactions with the EU. The Eurozone Crisis, for example, especially damaged British views of Europe: not simply because of what happened, but because it highlighted the ‘cost’ of remaining a member. The heightened tensions within the political establishments of the UK and the EU seeped into the general public psyche. The dual nature of the EU as both a trade bloc and a political union therefore damaged the UK’s relationship with it, increasing Euroscepticism and, in turn, the political pressure on Cameron to hold a referendum in 2016.

Immigration concerns conflated with EU

Freedom of movement is enshrined in the EU’s ‘DNA’. As set out in the 1957 Treaty of Rome, it can be defined as ‘EU nationals having the right to move freely within the European Union and to enter and reside in any EU member state’ (Bundesministeriums des Innern, 2015). Yet non-EU immigration to the UK has consistently exceeded EU immigration, meaning the argument that freedom of movement caused unsustainable immigration has been greatly exaggerated. It is the perception of EU immigration that has stuck: the EU became synonymous with immigration of any kind, however misguided that conflation may be.

Increased levels of non-EU and EU immigration put pressure on aspects of British culture that are not so open to those perceived as ‘non-British’. Integration is often difficult for those of a different culture: differences in language, traditions and skills can lead those with a strong sense of British national identity to perceive immigrants negatively, as a threat to what some see as British culture. And yet this immigration concern is incorrectly conflated with the EU, as the majority of immigration to the UK has little to do with the European Union (though one could also argue that British anti-immigration sentiment is largely unfounded regardless of the migrants’ place of origin). An excellent Chatham House paper cross-analyses people’s voting intentions (leave vs remain) against their attitudes towards immigration (both non-EU and EU). The trait that most divided ‘leavers’ from ‘remainers’ was their attitude towards immigration and British culture: nearly three-quarters of ‘outers’ agreed that ‘immigration undermines British culture’.


Social background of ‘inners’, ‘outers’ and undecided voters. (Chatham House, 2015)

This cultural negativity towards immigration therefore manifests itself in many ways, one of which is opposition to the EU, through the conflation of (any) immigration with EU membership. Freedom of movement is one of the EU’s most sacred principles, and the growing number of immigrants over the course of the UK’s membership only deepened this Euroscepticism, which in turn increased the likelihood of an EU referendum.

UKIP’s sudden rise

UKIP was founded in 1993 as a single-issue party, with the sole aim of taking the UK out of the EU via a referendum. Once Nigel Farage became leader in 2006, it grew in popularity, with gains in the 2013 local elections (22% of the vote), two Conservative MP defections to UKIP in 2014, and impressive results in both the 2014 European Parliament elections (the largest number of seats, with 24) and the 2015 General Election (12.5% of the popular vote). They were most certainly on the up.


Table to show distribution of seats in the European Parliament in 2014.

UKIP’s rise left Cameron’s electoral position increasingly threatened: UKIP is a right-wing party whose voters were more likely to be white and older than Labour’s electorate, so it was able to split the Conservative vote (Martill, 2018). In 2014 UKIP gained over a quarter of the vote in the European Parliament elections, outpolling the Conservatives. Understandably, this was a clear threat to the Conservative Party at the time. Though support for UKIP was clearly influenced by other factors (i.e. factors that pushed voters towards UKIP), the party managed to harness Euroscepticism among the general public and transform it into meaningful political pressure on David Cameron to hold a referendum. The nature of UKIP’s rise (sudden, large, and at a time when the Conservatives did not have a majority, before the 2015 General Election) was a very important factor in leading to the referendum. Arguably, UKIP’s pressure led Cameron to promise a referendum, lest he lose public, and potentially party, support, and with it a general election. Due to the rise of UKIP, a party built on support for a referendum on the EU, Cameron was therefore incentivised to put a referendum promise in his party’s 2015 manifesto and to hold one in 2016, in order to keep his Conservative Government in office.

Cameron’s desire for a quick fix

The Prime Minister is by far the main source of authority over whether to hold a referendum, so analysing Cameron himself is important in answering this essay’s question. His decisions around party management were an impactful factor in leading to the 2016 EU Referendum.

The promise of a referendum can be seen as a ‘quick fix’ to appease Eurosceptic backbenchers. As is clear from the rise of the Conservative Eurosceptic faction, tensions were building within the Conservative Party from 2013 onwards, threatening the Party’s ability to govern. Cameron therefore felt compelled to manage his party over Europe by delegating the decision to the public. When the referendum was first promised in January 2013, Cameron was concerned with stopping backbench rebellions within the coalition. He wanted to silence the Eurosceptic wing that had caused so much trouble for the party over the years; an ‘easy fix’ to a longstanding problem (Martill, 2018). A comment that encapsulates this comes from Donald Tusk (former President of the European Council), recounting his meeting with Cameron after the referendum was announced in 2013:

“Why did you decide on this referendum, [Tusk recounts asking Cameron this] – it’s so dangerous, even stupid, you know, and he told me – and I was really amazed and even shocked – that the only reason was his own party… [He told me] he felt really safe, because he thought at the same time that there’s no risk of a referendum, because his coalition partner, the Liberals, would block this idea of a referendum” (BBC, 2019).

Clearly, party management was very influential in Cameron’s decision-making. His desire to repair the divide in his party was therefore hugely impactful in leading to the 2016 EU Referendum.

In conclusion, the nature of our relationship with the EU, immigration sentiment, UKIP and Cameron’s decision-making were the most important factors leading to the EU Referendum. Especially impactful was UKIP’s ability to channel Euroscepticism into political pressure. But arguably, the end of our EU membership was spelt out from the beginning.


Works Cited

BBC, 2019. Inside Europe: Ten Years of Turmoil. [Online]
Available at: https://www.bbc.co.uk/programmes/b0c1rjj7
[Accessed 29 06 2021].

Bundesministeriums des Innern, f. B. u. H. B., 2015. Freedom of movement. [Online]
Available at: https://www.bmi.bund.de/EN/topics/migration/law-on-foreigners/freedom-of-movement/freedom-of-movement-node.html
[Accessed 29 06 2021].

Chatham House, 2015. Britain, the European Union and the Referendum: What Drives Euroscepticism?. [Online]
Available at: https://www.chathamhouse.org/sites/default/files/publications/research/20151209EuroscepticismGoodwinMilazzo.pdf
[Accessed 20 09 2021].

Eurobarometer, 2015. National versus European identification, s.l.: s.n.

Holmes, M., n.d. The Conservative Party and Europe. [Online]
Available at: https://www.brugesgroup.com/media-centre/papers/8-papers/807-the-conservative-party-and-europe
[Accessed 20 09 2021].

Martill, B., 2018. Brexit and Beyond: Rethinking the Futures of Europe. London: UCL Press.

Coronavirus and the economy


Lily in Year 13 wrote this article just before the start of Lockdown 2 in November 2020 in the UK. As we now gradually come out of Lockdown 3 some 7 months later, how much of the article rings true?

On 23 March 2020 the UK was placed under lockdown, and it has been moving in and out of lockdowns and restrictions ever since. This is likely to cause an economic slowdown, and possibly plunge the UK into recession over the coming years, because when people are in lockdown consumption and aggregate demand are likely to fall. There will also be impacts on workers and on the UK’s supply of goods, both domestic and from abroad, which is likely to hurt the economy further. Overall, the UK economy is likely to suffer as a result of Covid-19.

Possible impacts

Firstly, aggregate demand in the UK will be hit by the virus: when people are quarantined they are unable to go out and spend, especially now that all shops selling non-essential goods, along with bars and restaurants, have been told to close, so spending in these sectors is no longer possible. This will be compounded by currently low consumer confidence, as in uncertain times people save their money as a safety net to support themselves.

Consumption patterns will also change. Goods with an income elasticity of demand between 0 and 1 are likely to see little change in demand, as these goods are classed as necessities. Some might even see a rise in demand in the short term as people panic-buy (think back to scenes of toilet-roll panic buying in April 2020).

However, goods with an income elasticity above one are likely to see a decrease in demand: these are classed as luxuries, and people won’t prioritise purchasing them when economic conditions are uncertain. Overall consumption will therefore decrease, as people have fewer opportunities to spend out shopping or on luxuries, and are also likely to be more cautious with their spending. This fall in consumption will be exacerbated if people’s incomes are hit, for example if they were previously working in the gig economy, perhaps on zero-hours contracts or with irregular, situation-based income.
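The necessity/luxury distinction above can be sketched with a toy calculation of income elasticity of demand (YED, the percentage change in quantity demanded divided by the percentage change in income). All figures below are made up purely for illustration:

```python
# Toy sketch of income elasticity of demand (YED).
# YED = % change in quantity demanded / % change in income.
# The scenario figures are hypothetical, not real data.

def income_elasticity(pct_change_quantity, pct_change_income):
    return pct_change_quantity / pct_change_income

def classify(yed):
    # Standard textbook cut-offs: YED > 1 luxury,
    # 0 <= YED <= 1 necessity, YED < 0 inferior good.
    if yed > 1:
        return "luxury"
    if yed >= 0:
        return "necessity"
    return "inferior good"

# Hypothetical lockdown scenario: incomes fall 10%.
bread = income_elasticity(-5.0, -10.0)      # demand for a staple dips 5%
holidays = income_elasticity(-15.0, -10.0)  # demand for holidays drops 15%

print(classify(bread), classify(holidays))  # necessity luxury
```

A 10% income fall that cuts demand for a staple by only 5% gives a YED of 0.5 (a necessity), while the same income fall cutting holiday demand by 15% gives 1.5 (a luxury), matching the intuition in the paragraph above.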

These people would be relying on their savings or on government help, meaning their spending and consumption is likely to fall. To encourage more spending, the Bank of England has cut interest rates to a record low of 0.1%, encouraging people to spend rather than save, since they will gain very little from letting their money sit in a bank account. In addition, if demand falls while supply remains constant, the prices of goods and services are likely to fall. This can be seen in the drop in the price of flights as airlines suffer a shortage of demand: a flight from California to London that would previously have cost $1,000 can now be purchased for as little as $246. However, this drop in prices is likely to have a limited effect on demand because of the current uncertainty, which also means low interest rates are likely to have a limited effect on people’s behaviour. Recent travel bans will further limit the impact of lower airline prices, since people cannot take advantage of them when they are unable to travel, or will be deterred by quarantine requirements. Overall, then, aggregate demand and consumption in the UK are likely to fall, partly because there will be fewer opportunities to spend, but mostly because of consumer uncertainty. This will negatively impact the UK economy.

Aggregate supply in the UK will also be affected by the virus, impacting costs, along with exports if less is produced domestically. A fall in domestic supply could result in cost-push inflation: if demand for some products remains steady while Covid-19 restricts supply, prices will rise to reflect the limited stock or production. Such a fall in supply could occur if workers have to self-isolate and cannot go to work, meaning a company cannot operate at full capacity, causing an inward shift of its PPF.

Companies might also struggle to receive stock from other countries whose production has been hit by the virus, which could halt production in domestic businesses. This is likely to hit the manufacturing industry especially hard, as it relies on parts from abroad (the car company Jaguar, for instance, is now running out of parts it would usually ship from China[1]) and on people coming in to work, since it is very difficult to work from home if you work in a factory. This effect on supply could be softened because the UK has a flexible labour market: people can move from job to job relatively easily, so resources (people) can be reallocated and businesses won’t be hit as hard by the shock, compared with France, where the labour market is very inflexible.

The effect on supply also depends on how flexible product markets are: if a company could switch supplier there would be minimal effect, especially if the new supplier were located domestically rather than abroad. There might also be a time lag if producers had stockpiled, meaning it would take longer to run out of stock, though shortages are still likely in the long term. In the long run this outbreak could cause the LRAS curve to shift inwards if there is less investment, as companies work hard to stay afloat rather than investing or spending on R&D, meaning less is put into the country’s long-term productivity.

This can be seen through the fall in FTSE 100 share prices: on 9 March 2020 the index fell 8%[2], its worst day since the 2008 financial crisis, with £144bn wiped off its combined value. These top 100 companies were likely to be some of the largest investment spenders. That said, as the UK’s economy is mainly based on financial services, many people can work from home, meaning domestic supply issues may not affect the economy as much as in countries that rely heavily on manufacturing, such as China and Germany; the Chinese economy shrank by 6.8% in the first quarter of 2020, its first contraction since 1976. The government could also reduce the effect on domestic supply by subsidising companies so they can invest in the technologies they need and keep production lines running, or, since interest rates are low, companies could take out loans to invest. So there are options for companies trying to uphold supply and investment. However, there remains the underlying issue of people being unable to go to work because of self-isolation. This will have a huge impact on supply in the UK, and therefore on the economy: if products aren’t being made they cannot be consumed, and if they aren’t being consumed profits will fall, leaving companies with less money to invest, hurting supply in the short term and productivity in the long term.

Overall, the impact of the virus will be wide-ranging across the world and in the UK, hitting both supply and demand. The short-term fall in supply will be partly counteracted by the fall in aggregate demand: if both curves shift inwards there will be a new equilibrium point of production, and cost-push inflation is likely to be limited. However, the nation’s productivity and output will decrease, meaning the UK’s GDP is likely to fall significantly, plunging the country into recession. The British government is spending more to combat some of the impacts, but this is unlikely to cover the full economic cost, and the budget deficit will rise as a result. This also makes it likely that the government will follow a policy of austerity over the coming years, meaning people in the UK, and the UK economy, are likely to be hit hard by this crisis. However, as the virus is a global pandemic, its impact will be mirrored across the rest of the world.[3]

2021 Update

The UK Government introduced the furlough scheme to support workers and businesses that were unable to operate as normal owing to the impact of the virus. Up to April 2021 this has cost over £61 billion[4], covering 4.7 million jobs. Government spending is at the highest level ever seen outside periods of war.[5]

With increasing numbers of the population having now been vaccinated against the virus, and the recently announced reopening of restaurants from 17 May 2021[6], there is a feeling that we are gradually moving out of the crisis and that normality is becoming closer.

However, the vast spending seen since the crisis started in January 2020 will almost certainly mean that a return to pre-pandemic life will be a challenge, and that further austerity may be required to balance the books. The need to ‘level up’ the country, as announced by the Prime Minister on 11 May 2021[7], will be central to life as we know it for many years to come. Whether this can be achieved will be the subject of criticism and debate for the foreseeable future.


[1] See https://www.independent.co.uk/news/business/news/coronavirus-jaguar-land-rover-suitcases-supply-chain-factories-china-a9343336.html

[2] See https://www.theguardian.com/business/2020/mar/09/ftse-plunges-to-below-6000-amid-global-coronavirus-sell-off-oil#:~:text=9%20March%202020%20Fears%20of,into%20an%20official%20bear%20market.

[3] See https://www.ons.gov.uk/economy/grossdomesticproductgdp/articles/coronavirusandtheimpactonoutputintheukeconomy/december2020#:~:text=6.,declined%20by%209.9%25%20in%202020.&text=GDP%20measured%20by%20the%20output,growth%20of%201.4%25%20in%202019. The impact on the UK economy is a 9.9% fall in GDP over the course of the year.

[4] Table and reference from https://www.statista.com/statistics/1122100/uk-cost-of-furlough-scheme/

[5] See https://www.bbc.co.uk/news/business-52663523

[6] See https://www.telegraph.co.uk/politics/2021/05/12/covid-lockdown-roadmap-new-rules-may-17-dates-when-end/

[7] See https://inews.co.uk/news/politics/queens-speech-2021-boris-johnson-pledges-to-harness-spirit-of-lockdown-as-he-sets-out-uks-covid-recovery-995558

Impact study: the spread of imported disease in Australia and New Zealand

Sophia (Year 13) looks at how European colonialism spread disease to Australia and New Zealand.

Although colonisers’ actions, such as slavery, wars and other abysmal treatment of native populations, caused many deaths, one of the biggest killers was the introduction of new diseases to which natives had no immunity, owing to their previous isolation from the European invaders.

Image from Pexels

Between 1200 and 1500 Europe itself suffered several pandemics, due both to the growth of unsanitary cities, which created the perfect environment for infection, and to increasing contact with Asia, such as through the Mongol and Turkish invasions, which exposed Europe to major disease outbreaks. Between 1346 and 1351, for example, the Black Death killed about a third of Europe’s population. Relatively disease-hardened populations emerged from this period in Europe: although local epidemics still occurred after 1500, none were as bad as those of the years before; instead, the worst epidemics after this date struck colonised nations. Here I focus on the colonisation of Australia and New Zealand, which had different native peoples (the Aborigines and the Maori) and saw different effects of disease.

New Zealand

Imported diseases began to affect many Maori from the 1790s: viral dysentery, influenza, whooping cough, measles, typhoid, venereal diseases, and the various forms of tuberculosis. Missionaries and observers reported massive death rates and plummeting birth rates. However, unlike in the Americas and Australia, there is a strong chance that deaths from foreign disease were widely exaggerated.

Rather, such exaggeration labelled the Maori a dying race (a view which persisted to 1930), which helped project the British Empire into New Zealand in 1840. One reason the effect of disease was probably smaller here was simply the distance from Europe to New Zealand: a natural quarantine. The voyage took four months or more, so the sick either died or recovered, and either way were often no longer infectious on arrival. The most pernicious European diseases (malaria, bubonic plague, smallpox, yellow fever, typhus and cholera) therefore never transferred to New Zealand.

Image by Dan Whitfield via Pexels

Another factor which fostered the gross magnification of the Maori’s demise was the comparison of birth rates: missionary families were extremely large (the fourteen couples who went to New Zealand before 1833 had 114 children), so it was easy to amplify the decline in Maori birth rates into something far more serious than it was. Estimates of the Maori population at the point of European contact are very unreliable, in most cases wild guesses, and also allow the effect of disease to be misjudged. For example, one estimate for 1769 based on archaeological science gives a pre-contact birth rate of 37 per thousand per year and a death rate of 39[1], implying a population already in steady decline before Europeans arrived, which is implausible. More moderate calculations suggest an average decline of 0.3% per year between 1769 and 1858[2]. Therefore, although the Maori population did suffer as a result of these diseases, there is a tendency to exaggerate this, portraying them as a ‘weaker’ people and a dying race, allowing for easier colonisation.
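As a rough sense-check on that more moderate figure, a 0.3% annual decline compounded over the 89 years from 1769 to 1858 implies a cumulative fall of roughly a quarter; the starting population of 100,000 used below is purely illustrative:

```python
# Compound-decline check on the moderate estimate quoted above:
# an average fall of 0.3% per year from 1769 to 1858.
# The starting population of 100,000 is illustrative only.

start_year, end_year = 1769, 1858
annual_decline = 0.003

population = 100_000 * (1 - annual_decline) ** (end_year - start_year)
total_decline = 1 - population / 100_000

print(f"{total_decline:.1%}")  # a cumulative fall of roughly 23%
```

A decline of under a quarter over nearly ninety years is serious, but a long way from the collapse implied by the ‘dying race’ narrative.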

Australia

Although Australia was first sighted by the Dutch, it was a fleet of British ships that arrived at Botany Bay in January 1788 to establish a penal colony[3]. European disease spread to areas of Australia even before Europeans had reached those parts: for example, a smallpox epidemic near Sydney in 1789 wiped out around half of the Aborigines there.[4]

Photo by Damon Hall from Pexels

Some historians claim the disease was acquired through contact with Indonesian fishermen in the far north and then spread; others argue the outbreak was likely a deliberate act by British marines when they ran out of ammunition and needed to expand their settlement. Unfortunately, colonial thinking at the time placed Europeans as the ‘superior race’; a book on the colony of Victoria written by William Westgarth in 1864 included the line: ‘the case of the Aborigines of Victoria confirms…it would seem almost an immutable law of nature that such inferior dark races should disappear’. Therefore, as with New Zealand, describing the natives as a ‘dying race’ was an important tool of colonisation, meaning the purposeful introduction and spread of some diseases is not too hard to believe.

Smallpox spread between Aboriginal communities, reappearing in 1829-30, when according to one record it killed 40-60% of the Aboriginal population.[5] In addition, during the mid-to-late 19th century many Aborigines in southern Australia were forced to move to reserves; the nature of many of these institutions enabled disease to spread quickly, and some even began to close as their populations fell.

Conclusion

Although one must be wary of statistics about native mortality rates in both countries, given the European tendency to exaggerate the decline in native populations, it is fair to say that the decline in the Aboriginal population was much steeper than that of the Maori in New Zealand, although wars also contributed massively to this.

While roughly 16.5% of the New Zealand population is Maori, only 3.3% of Australians are Aboriginal, and it is safe to say that disease influenced this to some extent. So why was there such a difference between the effects of disease in these two countries, seemingly close together and both colonised by the British? A very large part of the answer is smallpox: by far the biggest killer in Australia, it never reached New Zealand. The nature of the existing native communities also mattered. There were 200-300 different Aboriginal nations in Australia, all with different languages, whereas the Maori were far more united, and often seen as a more ‘advanced’ society; as a result they were never forcibly placed into reserves, which is where much of the spread of disease in Australia took place.

In addition, events in New Zealand occurred much later than in Australia, after slavery had been outlawed, so the approach was slightly more humanitarian, and there is less evidence to suggest purposeful extermination of the Maori. This is not to discount the injustices suffered by the Maori: many did die from European disease, and in both cases the native populations were treated appallingly and alienated from their land.

The influence of European disease was overwhelmingly more powerful in Australia. However, one must approach statistics about the effect of disease on native peoples with caution, as Europeans tended to exaggerate it in order to portray such peoples as ‘dying races’, a device often used to support colonisation.


Bibliography

Ian Pool, Te Iwi Maori (New Zealand: Oxford University Press), 1991

James Belich, Making Peoples (New Zealand: Penguin Books), 1996

John H. Chambers, A Traveller’s History of New Zealand and the South Pacific Islands (Great Britain: Phoenix in association with the Windrush Press), 2003


[1] cited in Ian Pool, Te Iwi Maori (New Zealand: Oxford University Press), 1991, 35

[2] Ibid, 56

[3] wikipedia cites Lewis, Balderstone and Bowan (2006), 25

[4] Judy Campbell, Invisible Invaders: Smallpox and other Diseases in Aboriginal Australia 1780-1880 (Australia: Melbourne University Press), 2002, 55

[5] wikipedia cites Richard Broome, Arriving (1984), 27

A short introduction to foreign aid

Alex in Year 13 writes a short introduction to foreign aid, highlighting some of the successes and problems that can appear from the charitable act of giving.

Instilled in us from a young age is the principle that we should help those in extreme need. And what could be simpler? From charity mufti days to bake sales, this theory underpins social behaviour in our modern day. It is indeed this principle that drives support for foreign aid.

In the words of Roger Riddell, ‘The belief that aid is a ‘good thing’ is sustained by the assumption that the resources or skills that aid provides do make a difference to those being assisted’. However, the impact of such aid on recipient countries is not always as positive as it may initially appear.

As the effects of climate change increase the frequency and severity of natural disasters, we often see foreign aid spent in emergency form. Altruism of this kind is uncontested, as in the short term these humanitarian responses are overwhelmingly positive. It is with sustained aid, however, that potential problems arise.

Over time, foreign aid has expanded from small beginnings into a large and complex global enterprise. Development cooperation (as foreign aid is also called) is now established as an integral part of international relations, with a UN target for donor countries of 0.7% of gross national income. For the UK, this sum stood at £14.6 billion in 2018. As the charts below show, few countries meet this target.

Net development assistance by country (total million US$ in 2015)

 

However, if we look at this giving as a percentage of GDP instead, a rather different picture emerges:

 

For many people, this huge economic contribution to foreign aid and development is a triumph for humanitarianism and for society as a whole. An ethical theory closely linked to foreign aid is utilitarianism: put simply, the notion that ‘moral life should be guided by the objective of trying to achieve the maximum happiness for the greatest number of people.’

As Paul Streeten put it, ‘a dollar redistributed from a rich man to a poor man detracts less utility than it adds, and therefore increasing the sum total of utility’. This argument is compelling and easy to grasp, which explains why foreign aid is so often short-sightedly seen as a win-win situation.

Unfortunately, this is not always the case. Several key factors can inhibit the aggregate impact of foreign aid.

The first problem with aid is its potential for misuse. Additional resources in the hands of potentially corrupt governments are a significant impediment to the optimal use of funds, because the fungibility of aid can allow the financing of non-developmental projects against the interests of the population. Aid itself therefore has, in some cases, the perverse ability to harm recipient economies.

Secondly, there are limits associated with a country’s absorptive capacity. As the volume of aid increases, it is subject to diminishing marginal utility. In basic terms, the effect is as if I gave you one chocolate bar that you enjoyed consuming; a couple more wouldn’t do you any harm, but once I’ve given you 100 chocolate bars, each individual bar’s worth has decreased along the way. In this way it can be seen that after a certain point (called the absorptive capacity threshold), providing more aid becomes completely ineffective.
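The chocolate-bar intuition can be sketched with a toy utility function; the logarithmic form below is a standard illustrative choice in economics, not a claim about real preferences:

```python
import math

# Toy illustration of diminishing marginal utility, assuming a
# logarithmic utility function u(n) = ln(1 + n), where n is the
# number of chocolate bars consumed.

def utility(n):
    return math.log(1 + n)

def marginal_utility(n):
    # Extra utility gained from the nth chocolate bar.
    return utility(n) - utility(n - 1)

# The first bar adds far more utility than the hundredth.
print(round(marginal_utility(1), 3), round(marginal_utility(100), 3))
```

Each successive bar adds less utility than the one before, mirroring the argument that beyond the absorptive capacity threshold, each extra unit of aid achieves less.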

Finally, fluctuations in aid inflows are external shocks to vulnerable economies, which plan expenditures based on promised aid commitments. When a highly dependent country’s aid is not given in full, this can damage future growth prospects significantly.

From all this, we can gather that the future of aid-giving and its associated policies may need modifying to ensure aid is given and used in the most efficient and appropriate ways possible, enabling it to help those who are most in need.

Is geothermal energy the answer to our climate problems?

Lucy in Year 10 looks at issues surrounding climate change and the damage our current ways of living are having on the planet. Might geothermal energy offer the UK, and the world, a solution for us to clean up our act?

We are in the midst of a climate crisis; the UK government has recently made a commitment to achieve net zero emissions by 2050 to help stop further damage to the environment. The burning of fossil fuels to generate power is a significant contributor to the UK’s greenhouse gas emissions, so the use of renewable energy sources is critically important to meeting this commitment to achieve net zero emissions. There are already many established sources of renewable energy, such as wind, solar and tidal power, but geothermal energy might be an unexpected solution to the UK’s problems.

Geothermal energy: a solution to a cleaner future?
Picture from https://www.britannica.com/science/geothermal-energy

Geothermal energy uses the natural heat from within the Earth’s crust to heat water and create steam.  This steam then powers a turbine in a similar way to the production of energy using fossil fuels, with the key exception that the heat comes from the earth instead of from the burning of coal, oil or gas.  So, like other forms of renewable energy, geothermal energy produces far less CO2 than fossil fuels do.

The key advantage geothermal energy offers over many other forms of renewable energy is consistency.  Solar cells and wind turbines rely on climate and weather conditions to operate, which means that the amount of energy produced varies and can be unreliable.  Geothermal energy doesn’t have that problem: a geothermal plant can produce a steady supply of energy regardless of the weather. The problems caused by inconsistent energy provision have already been seen; only weeks after setting a new wind power generation record, a breezeless day in January 2021 resulted in a shift back to fossil fuelled power and a tenfold surge in spot energy prices.[1]

Geothermal energy is currently in the news following the announcement of the UK’s first deep geothermal power plant, in Porthtowan, Cornwall.  It will produce enough energy to power around 10,000 homes.[2] So, why don’t we build them everywhere?[3]

While geothermal energy does have significant benefits, it also comes with its own set of problems.  The most prominent of these is the very specific characteristics of the Earth’s crust needed to be able to superheat the steam and power the turbines. Unlike Iceland, which sits on a tectonic plate boundary, the UK has few such locations, and some will unfortunately be in populous areas, where the negative aesthetics of a power station would outweigh its benefits. Another worrying fact about geothermal plants is that their construction, and the drilling of geothermal wells into the Earth’s crust, have been linked to several earthquakes over the past decade (such as the magnitude 5.4 earthquake in Pohang, South Korea in 2017).  While this is less of a risk for the UK, which is geologically more stable, it is still a factor to be considered. I would hasten to add that this risk is smaller than that posed by CO2 from fossil fuels or the toxic clean-up of a nuclear power station!

While geothermal energy plants are undoubtedly an effective and positive use of the Earth’s natural resources to create a sustainable and consistent supply of energy, the problems that their construction and capabilities raise mean that it would be impossible for them to become the sole provider of the UK’s energy. However, it is undeniable that their existence and use could aid the UK greatly in our battle against greenhouse gases and the climate crisis. While geothermal energy cannot solve the climate problem alone, it should definitely be a part of the UK’s, and the world’s, solution to the threat that is the climate crisis.

 


REFERENCES

[1] https://www.thetimes.co.uk/article/the-energy-answer-is-not-blowin-in-the-wind-xbntdm6pv

[2] https://www.newscientist.com/article/mg24032000-300-supercharged-geothermal-energy-could-power-the-planet/

[3] Check out https://cornishstuff.com/2019/09/11/successful-drilling-at-uks-first-deep-geothermal-plant-in-cornwall/ to see the new Geothermal Plant take shape

 

Why are Belgian politics so complicated?

This week’s WimLearn post is an extract from Hannah B’s EPQ about Belgium’s political system.

Since its independence on 4th October 1830, the Belgian constitution has granted citizens the right to freedom of language, meaning they can choose which language to conduct their daily lives in. Article 30 states that ‘only the law can rule on matters involving language, and only for acts of the public authorities, and in legal matters’ (Vermeire Elke, Documentation Centre on the Vlaamse Rand, 2010). This freedom of language also complicates political matters such as national polling, because votes from both language communities must be collated and moderated for a fair system.

Additionally, article 4 states that “Belgium has four linguistic areas: The French-speaking area, the Dutch-speaking area, the bilingual area of Brussels Capital and the German-speaking area.” Around 55% of the Belgian population belong to the Flemish community, whilst 40% belong to the Walloon community, and just 1% to the German Community. However, 16% have Dutch as their second language, whilst 49% have French as their second language. Overall, this means that for the national government, ratios must be put in place to ensure that one linguistic group does not outweigh the other on the basis of their population.

Above: Image from https://brussels-express.eu/wacky-world-belgian-politics/

Over the past 20 years, Belgium has not seen much political stability, largely due to their language divide. Belgium has a multi-party system, which means that political parties are often required to form coalition governments with each other. An issue that immediately arises when a coalition government must form is the parties’ cooperation.

In Belgium, this is made difficult by the languages that the two sides speak. Before political decisions are even made, even the preliminary question of who should form a coalition is hampered. Whilst the regions are able to communicate with each other, both sides have preconceptions, and therefore hesitations about working together. These doubts are supported by the fact that Belgium has previously been without a government for 541 days due to disagreements. The effect the language divide has on cooperation is clear here.

The fact that instant interpretation is often required would imply that Belgium’s political instability is due to its language divide; however, this is not the whole story. There are 43 administrative arrondissements, an administrative level between the municipalities and the provinces. Each party must form a list of candidates for each of these arrondissements.

Arrondissements are split so that Flemish-speaking and French-speaking citizens will not fall under the same one. Many rules surrounding the use of language are put in place to minimise disagreement and regional superiority. During political campaigns, there are restrictions on the use of billboards, and campaigns only last for around one month. In national politics, politicians can choose to speak any of the three official languages, and the parliament will provide simultaneous interpretation. This would suggest that language itself is not the issue, but rather conflict of interest. All other official correspondence, such as tax returns or passport requests, must be conducted in the official language of the region. At the age of 18, all citizens are automatically placed on the electoral roll, and are subject to compulsory voting.

Celebrating the first year of PPE at Wimbledon High

Ms Suzy Pett, Assistant Head (Teaching and Learning) at WHS, looks back at the end of the first year of the new PPE course studied by Year 10 pupils at WHS.

We are at the end of the inaugural year of our PPE course. We wanted students to look outwards and question the ideologies – political, economic, philosophical – that are influential in shaping our world. One of our school’s key objectives is for each student to “stride out” and be prepared to “shape the society in which she lives and works”. Our PPE course has certainly helped our Year 10s become savvy and robust thinkers on important global, national and personal issues.

The course ended with students writing their own articles on a topic of their choice. The array of interests was kaleidoscopic! Articles ranged from Kantianism vs Utilitarianism; to immigration; to beauty; to Plato; to student loans; to voting…to Trump…and everything in between (including, of course, the impact of Coronavirus). There is no doubt that students have developed mature, thoughtful and increasingly bold voices on these matters. Their articles were hugely impressive.

Here is a small selection for you to enjoy:

Izzy S – Successes of the language of populism

Jasmine H – Student Loans: Friend or Fraud?

Amy C – ‘If Walls could talk!’ – What we can learn from the first modern artist about the value of isolation to our ability to express ourselves

Bella R – Your President

Marianne-K-PPE-Project

 

Does Great Britain need to move on from the Second World War?

Rosie, Year 11, shares her recent WimTalk with us, discussing issues surrounding the way Britain remembers its past to shape its future.

September 2nd, 1945, Tokyo Bay. On the deck of the American battleship USS Missouri, the Japanese Instrument of Surrender document was signed by representatives from Japan, the United States, China, the United Kingdom, the USSR, Australia, Canada, France, the Netherlands, and New Zealand. World War Two was officially over. This ceremony aboard USS Missouri lasted 23 minutes, and yet the impact of what it represented still resonates today, almost 75 years later.

Now, in 2020, Great Britain has not moved on from the Second World War – far from it. Everywhere in Britain, wartime memorials and museums can be found, remembering the half a million soldiers and civilians who lost their lives. Most British people have a relative who fought in or experienced the war, and there are few who would not recognise the phrase ‘We shall fight on the beaches’ from Churchill’s most famous speech. And this prominent remembrance is not just confined to the older generations: it is an integral part of every child’s education too. Hundreds of books, TV programmes, podcasts and films have documented the war with great success – even recently. The modern economy, too, remembers the war, with Britain making the final war loan payment to the United States only 14 years ago, in 2006. Overall, the memory of the Allied victory in the Second World War – “our Finest Hour” – inspires the national sense of pride in our military history that has become a rather defining British characteristic.

But the question is: why does Great Britain cling on to the Second World War more than any other nation involved? And is this fixation justified, or is it time to move on?

One perspective is that the British viewpoint of the Second World War is bound to be different because of geography. The triumph of a physically small island nation prevailing in war is something we can celebrate and take pride in. For other nations involved – larger landlocked countries with shifting borders – this is less easy. For example, Germans today are less inclined to look back, not only because of the radical changes in society since the Third Reich or the lack of a victory to celebrate, but also because modern Germany is physically different to the earlier Germany of the Kaisers, Weimar, Hitler and the divided states of the Cold War. Instead, Germany today looks forward, not backwards, which some would argue has allowed it to become the economic giant on the world stage that it now is.

And that’s another thing – how much has Britain changed since the Second World War? Of course, it has modernised along with the rest of the world: politically, economically, and physically, but so many of the same institutions remain as were present in 1939. Our democratic government, our monarchy, our military and traditions have survived the test of worldwide conflict twice in one century, the collapse of the British Empire and the Cold War in a way that those of France, Spain and Italy have not.

Above: Photo from wikimedia commons

The Second World War was a clear clash of good vs bad – peace vs aggression. Britain was not directly attacked by Hitler but stepped up to honour a promise to defend Poland against invasion for the greater good. Remembering the Second World War makes Britain proud of these national values, for had Chamberlain not roused himself from his policy of appeasement and committed Britain to the sacrifice of money, empire and life, and had Churchill not fortified the nation’s most important alliance with Roosevelt, the world would certainly be a very different place today. And so, if a nation’s psyche comes from the values and institutions it possesses that have stood up throughout history, is it really any wonder Brits take pride in looking back?

On the other hand, perhaps after so many years it’s time to recognise that we are not, in fact, the same Britain that we were in 1945. In 1944, British economist John Maynard Keynes spoke at the famous Bretton Woods conference. He said that the Allies had proven they could fight together, and now it was time to show they could also live together. In achieving this, a genuine ‘brotherhood of man’ would be within reach. At this conference, the IMF and World Bank were created, soon followed by the UN, to promote peace and prevent the kind of economic shocks that led to war in the first place. But at the same time, these organisations were a convenient way for the main Allied powers to solidify their power and privileges. Since then, a European has always headed the IMF, and an American the World Bank. The UN Security Council is dominated by the five permanent members, whose privileged position, some say, is nothing but a throwback to the power distribution on the world stage of 1945. By clinging on to the war, are we really clinging on to the idea that Britain is still a leading power, and modern economic giants such as Germany and Japan do not deserve to disrupt the power structure of 1945? We pour so much money into Britain’s defence budget to maintain this powerful status – into remembered threats and sometimes archaic strategies: submarine warfare, aerial dogfights and manned bombers. The Second World War was certainly a catalyst for change across the globe. Perhaps now, Britain’s inability to let go of these old power ideals and designated roles of nations prevents us from achieving the ‘brotherhood of man’ that, in 1944, Keynes dared to dream of.

We are told that the value of history is to ‘learn a lesson’ to prevent us from repeating the same mistakes again. But there is an argument to say that this concept is a consistent failure. So many conflicts around the world seem to be caused by too much remembering: refreshing tribal feuds, religious division, border conflicts, expulsions and humiliations. Doesn’t remembering cause Sunni to fight Shia or Hindu to fight Muslim? Is it memory that maintains dispute in the Balkans, the Levant, Mesopotamia? Perhaps the emotion sparked by remembering the details of our past is better left in history when it has the capability to spark aggression, conspiracy theories and irrational anger. Today’s politics of identity seem provocative enough without being fuelled by history, so perhaps we should heed Jorge Luis Borges who wrote: ‘The only vengeance and the only forgiveness is in forgetting’. This advice has been proven to work over time – Nelson Mandela’s philosophy in 1990s South Africa was to focus on ‘truth and reconciliation’ and draw a line under his country’s recent history – closure. Can Britain not find closure on the 20th century?

What I can conclude is that there are two perspectives to take on this statement: there are some who hold onto our history as a lesson for the future, as a reminder of the importance of peace and action for the greater good, who will never be able to forget the Second World War because of the core British values that it represents. And then, there are those who think it is time to let go of the past, and adapt our nation’s values to suit our current position in the quickly-changing world that we live in. And so, the only question I have left to ask is: which are you?

Did the Great Depression influence the response to the 2008 Financial Crisis?

Lauren, Year 13, discusses whether the Great Depression influenced the response to the 2008 Financial Crisis.

During the Great Depression, wages were cut for workers, which led to a reduction in demand. This resulted in the bankruptcy of thousands, as the stock market went into free fall after the Wall Street Crash.  Between 1929 and 1932 more than 100,000 businesses went bankrupt, and around 11,000 banks stopped trading. When these banks shut down, savers lost all of their money so they could no longer buy consumer goods. This reduction in demand resulted in the redundancy of many workers, ultimately creating a further decline in the level of aggregate demand. Thus, the economy entered a downward spiral.

President Hoover interpreted the Depression as a passing phenomenon, a normal business downturn, rather than the structural crisis it proved to be.  Consequently, when an attempt to take action was made, it was a little too late. Following the Great Depression, regulations were altered and economic policies restructured all across the world. The economic system was redesigned to avoid a repeat of this disaster, and levels of government spending were increased.

After the Great Depression, it was often assumed that there would not be another economic downturn of such major proportions, as it was believed that the lessons learned then could be applied to any future crisis, protecting against such economic turmoil. However, those lessons were not enough to fend off the Financial Crisis of 2008, by which time non-bank institutions, which were not regulated to the same extent as commercial banks, had come to play a major role in lending.

Several measures were put into place in order to alleviate the effects of the crisis. In the USA, the Federal Reserve extended emergency loans, and the government even tried a Keynesian fiscal stimulus in early 2008 to ‘jump-start’ the economy, but this was not successful enough because the stimulus was too small, at only about 1% of GDP.

A matter of interest to many economists is how the crisis was dealt with in the UK under the Labour government, because they continued to spend significantly in the immediate aftermath of the Financial Crisis. This helped to ease the initial impact because it reduced the economic downturn, but the rate at which the national debt was shooting up was dangerous.


The coalition government slashed public spending after 2010, damaging public services and holding back economic recovery after the crisis. Although it would have been wrong to ignore the huge government deficit inherited from Labour, it could be suggested that Osborne should not have cut spending on infrastructure and capital to such a degree, which would have allowed the UK to invest to boost productivity.

In conclusion, had the enormous intervention by governments not happened, the impact of the Financial Crisis would have been significantly greater. This means that it can be argued that the Great Depression did, to some extent, influence the response to the 2008 Financial Crisis as it persuaded governments to intervene quickly and at great expense in order to avoid a repeat of events in the 1930s. However, it can be argued that these policies were not as successful as envisaged due to the complexity of new financial instruments.

GROW 2.0 – Being Human in an AI World

On Saturday 21st September we host our second Grow Pastoral Festival. The theme for this year is an examination of what it is to be human in a machine age. What questions should we be asking about the way technology affects our lives and what are our hopes for the future? More specifically, how will our young people develop and grow in a fast-paced, algorithmically driven society and what might education look like in the future?

 
In the morning session Professor Rose Luckin and Professor Robert Plomin will be giving keynote addresses, and then talk with our Director of Digital Learning & Innovation, Rachel Evans.
Prof Luckin specialises in how AI might change education; Prof Plomin has recently published Blueprint, a fascinating read about genetics and education. We can’t wait to talk about how education might get personalised, and how that change might affect our experience of learning.

In the afternoon we’ll dive into some provocative debate with Natasha Devon, Hannah Lownsbrough and Andrew Doyle, addressing questions of identity, wellbeing and community in an online age with our own Assistant Head Pastoral, Ben Turner.

So what kind of questions are in our minds as we approach this intellectually stimulating event? Ben Turner brings a philosophical approach to the topic.


Is our ever-increasing reliance on machines and subscription to the ‘universal principles of technology’[1] eroding our sense of empathy, compassion, truth-telling and responsibility?



Our smartphones give us a constant connection to an echo-system that reflects, and continuously reinforces, our individual beliefs and values. Technology has created a world of correlation without causation, where we understand what happened and how it happened but never stop to ask why it happened. Teenagers are understandably susceptible to an ecosystem of continuous connection, urgency and instant gratification. It is these values that they now use to access their world and that tell them what is important in it.

Are tech giants like Amazon, Google and Facebook creating a monoculture that lacks an empathy for its surroundings? If we all become ‘insiders’ within a technology-dominated society, pushing instant buttons for everything from batteries to toilet roll, are we losing the ability to see things from a fresh perspective? By raising children in a world of instant access and metropolitan monism, are we creating only insiders: young people who will never gain the ability to step back and view what has been created in a detached way? How, as parents, schools and communities, do we keep what is unique, while embracing the virtues of technological innovation?

Is social media destroying our free will?

If you are not a determinist, you might agree that free will has to involve some degree of creativity and unpredictability in how you respond to the world. That your future might be more than your past. That you might grow, you might change, you might discover. The antithesis of that is when your reactions to the world are locked into a pattern that, by design, makes you more predictable – for the benefit of someone or something else. Behaviourism, developed in the early 20th century, involves collecting data on every action of a subject in order to change something about their experience, often using punishment or reward to enact the change. Is social media, through its algorithms, gratification systems and FOMO, manipulating our actions and eroding our free will?

Social media is pervasive in its influence on the beliefs, desires and temperaments of our teenagers, and you do not have to be a determinist to know that this will lead to a disproportionate level of control over their actions. Does social media leave our young people with no alternative possibilities: locked in a room, not wanting to leave but ignorant of the fact that they cannot?

Is social media the new opium of the masses?

Social media has changed the meaning of life for the next generation. The shift in human contact from physical interactions to arguably superficial exchanges online is having a well-documented detrimental effect not only on individual young people but also on the very fabric and makeup of our communities.

In addition to the ongoing concerns about privacy, electoral influence and online abuse, it is becoming increasingly obvious that social media has all the qualities of an addictive drug. Psychologists Daria Kuss and Mark Griffiths wrote a paper finding that the “negative correlates of (social media) usage include the decrease in real life social community participation and academic achievement, as well as relationship problems, each of which may be indicative of potential addiction.”[2]

That is not to say that everyone who uses social media is addicted. However, the implications of ‘heavy’ usage of social media by young people increasingly paint an unpleasant picture. The UK Millennium Cohort Study, from the University of Glasgow, found that 28% of girls between 13 and 15 surveyed spent five hours or more on social media, double the number of boys surveyed who admitted the same level of usage. Moreover, NHS Digital’s survey of the mental health of children and young people in England[3] found that 11 to 19 year olds with a “mental disorder” were more likely to use social media every day (87.3%) than those without a disorder (77%), and were more likely to be on social media for longer. Rates of daily usage also varied by type of disorder; 90.4% of those with emotional disorders, for example, used social media daily.

Panel Discussion

However, there is more to this than just the causal link between the use and abuse of social media and poor mental health. With the march of technology in an increasingly secular world, are we losing our sense of something greater than ourselves? Anthony Seldon calls this the “Fourth Education Revolution”, but as we embrace the advances and wonders of a technologically advanced world, do we need to be more mindful of what we leave behind? Da Vinci, Michelangelo and other Renaissance masters not only worked alongside religion but were also inspired by it. Conversely, Marx believed religion to be the opium of the people. If social media is not to be the new opium, we must find a place for spirituality in our secular age. Even if we are not convinced by a faith, embracing the virtues of a religious upbringing – namely inclusivity, compassion and community – seems pertinent in these turbulent times, because if we do not, then very quickly the narcissistic immediacy and addictive nature of social media will fill the void left in our young people’s lives, becoming the new opium of the digital age.


References:

[1] Michael Bugeja, Living Media Ethics: Across Platforms, 2nd Ed. 2018

[2] Online Social Networking and Addiction – A review of Psychological Literature, Daria J. Kuss and Mark D. Griffiths, US National Library of Medicine, 2011

[3] November 2018