Japan – a culture to die for? Cultural attitudes to suicide in Japan and the West

Wimbledon High History

Gaining publicity following YouTuber Logan Paul’s video filmed in Aokigahara, one of Japan’s suicide hotspots, the extremely high suicide rate in Japan has featured increasingly in Western news. In this article, Jess Marrais aims to explore possible historical and traditional reasons for both Japanese and Western attitudes towards suicide.

The world of YouTube and social media crossed over into mainstream media on 1st January 2018 following a video uploaded by popular YouTuber, Logan Paul. Paul and a group of friends, while travelling around Japan, decided to film a video in ‘Aokigahara’, a forest at the base of Mt Fuji, famous as the second most popular suicide location in the world. The video, which has since been taken down, showed graphic images of an unknown man who had recently hanged himself, and Paul and the rest of his party were shown joking about and trivialising the forest and all that it represents.

Unsurprisingly, Paul received a great deal of backlash, as did YouTube for its lack of response to the video itself. The whole situation has reignited discussion of Japanese suicide rates, both online and in mainstream media sources such as the BBC.

In the discussions surrounding the problem, I fear that little has been said in the UK about the cultural attitudes in Japan towards suicide, and how drastically they conflict with the historical beliefs entrenched in our own culture.

In Christianity, suicide is seen as one of the ultimate sins: to kill oneself is to play God, to decide when a soul should leave the Earth, and to break one of the Ten Commandments (‘Thou shalt not murder’). Historically, those who died by suicide were forbidden from having a Christian funeral or burial, and it was believed that their souls would have no access to heaven. As a result, it makes sense that in Christian countries suicide is frowned upon. We in the West view the high suicide rate in Japan, and in other East Asian countries, through our own cultural understanding, when in actual fact the problem should be seen within the cultural and historical context of the countries themselves.

In Japan, the history of the samurai plays a large role in attitudes towards suicide. The samurai (military nobility) held a monopoly on power in early Japan, and they lived by the code of ‘Bushido’: moral values emphasising honour. One practice central to Bushido was ‘seppuku’: should a samurai lose in battle or bring dishonour to his family or shogun (feudal lord), he was expected to kill himself by slitting open his stomach with his own sword in order to regain his – and his family’s – honour in death. Due to the prominent role the samurai played in Japanese society, this idea of killing oneself to regain honour seeped into all aspects of society, personal and familial honour being a central part of Japanese values even today.

More recently, this warrior attitude to death could be seen in the famous World War II ‘kamikaze’ pilots: pilots who purposefully crashed their planes, killing themselves and destroying their targets (usually Allied ships). These pilots were typically young, and motivated by the prospect of bringing honour to their family and Emperor in death. During the war, 3,682 kamikaze pilots died, spurred on by the samurai code of Bushido.

In the modern day, suicide is seen by many in Japan as a way of taking responsibility. Suicide rates in Japan soared after the 2008 financial crash, reaching their peak at the end of the 2011 economic year. Current statistics suggest that around 30,000 Japanese people of all ages commit suicide each year, as opposed to around 6,600 per year in the UK. Increasing numbers of Japan’s ageing population (those over 65) are turning to suicide to relieve their family of the burden of caring for them. There are even cases of unemployed men killing themselves to enable their family to claim their life insurance, in contrast to the UK, where suicide typically prevents life insurance from being claimed. Despite the end of the samurai era and the Second World War, the ingrained mentality of honour drives thousands of people in Japan to end their own lives, motivated not only by desperation, but also by the desire to do the right thing.
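To put those raw figures in proportion (a rough calculation added here for context rather than taken from the article, assuming populations of roughly 127 million for Japan and 66 million for the UK at the time), the totals translate into per-capita rates of approximately:

$$\text{Japan: } \frac{30\,000}{127\,000\,000}\times 100\,000 \approx 24 \text{ per } 100\,000 \qquad\quad \text{UK: } \frac{6\,600}{66\,000\,000}\times 100\,000 = 10 \text{ per } 100\,000$$

On a per-person basis, then, Japan’s rate works out at roughly two to two-and-a-half times the UK’s, which is the comparison the raw totals are gesturing towards.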

If anything can be taken away from this, it is to view stories and events from the cultural context within which they occur. While suicide is a tragic occurrence regardless of the country/culture in which it happens, social pressures and upbringing can – whether we are aware of it or not – influence a person’s actions. If this lesson can be carried forward to different cultures and stories, we will find ourselves in a world far more understanding and less judgemental than our current one.

Follow History Twitter: @History_WHS

Suicide hotlines:

  • PAPYRUS: support for teenagers and young adults who are feeling suicidal – 0800 068 41 41

Further reading:

‘He’s gotta be strong and he’s gotta be fast and he’s gotta be fresh from the fight…’ Achilles vs Odysseus: Who is the greatest hero?

By Anna Jeffries-Shaw, Year 12.

In the following post, Anna hopes to make you question the concept of heroism by exploring the characters of two of the most famous heroes from the Ancient World: Achilles and Odysseus.

Heroes are prevalent in everyone’s life. Whether your hero is a real person or a character from a movie, someone close to you or someone you have never met, everybody has some sort of hero or role model. The concept of a hero, however, has existed for millennia, dating back to Ancient Greece. Here, we discover some of the most famous heroes to have existed: Hercules, Hector, Aeneas, Perseus, Theseus, Jason, and Atalanta (curiously the only female who repeatedly makes lists about the top heroes in Greek mythology). And of course, arguably two of the most famous men and heroes of history: Achilles and Odysseus.

Before it is possible to begin to tackle the question of which of these men was the greater hero, it is first necessary to explore the even greater question of what constitutes a hero. The OED defines a hero as ‘a person who is admired for their courage, outstanding achievements, or noble qualities’. In other words, a role model. In Greek tradition, however, a hero was a human endowed with superhuman abilities by virtue of being descended from an immortal god. A hero in that era would not carry any of the additional connotations of moral worth, valour and so on that we have come to expect. In fact, the Greek word ‘ἥρως’ (pronounced ‘heros’), which is usually translated as hero, actually just means ‘warrior’. And so this debate seems futile, in a sense, as expecting ‘heroism’ of either Achilles or Odysseus – expecting them to conform to any of our ideas of what a ‘hero’ is – is an anachronism.

Nevertheless, it is a topic worth exploring: which man was more heroic?

Achilles: the hero of the Iliad. Brutal, vain, pitiless… and thus a true hero. He does not fit modern conventions of morality. He is a killer, a rapist, a plunderer. He is temperamental, which has dire consequences right from the beginning, revealed by the opening lines of the Iliad:

“Anger be now your song, immortal one,

Achilles’ anger, doomed and ruinous,

that caused the Achaeans loss on bitter loss.”

(Translated by Robert Fitzgerald)

He can be pitiless, and he can be murderously cruel. Yet there is still something fundamental about him to which we can all relate. He may be an original Byronic hero, fitting the description of the literary character named after Lord Byron (a poet of the Romantic movement) before the term was even coined. The Byronic hero is usually dark and moody, sexually intense, mysterious, emotionally troubled and arrogant, and Achilles is all of these things. He is expected to perform numerous heroic deeds, yet he disagrees, complains, and is willing to go to any length just to prove he’s right. He’s not necessarily the kind of person one wants to be, but certainly the kind of person one can relate to.

Left: Brad Pitt as Achilles in Troy (2004) Right: Sean Bean as Odysseus in Troy (2004)

Contrasting to the seemingly brutish Achilles is Odysseus, the hero of the Odyssey, which begins thus:

“Sing to me of the man, Muse, the man of twists and turns

driven time and again off course, once he had plundered

the hallowed heights of Troy.”

(Translated by Robert Fagles)

His key feature is his cunning. He is not primarily a rash or fierce hero, although his physical strength and the other conventional aspects of Ancient Greek heroism are not to be overlooked. Odysseus is a multi-faceted hero. He is ‘πολυτροπος’ (pronounced ‘polutropos’). This is translated in a variety of different ways, with different implications. In the Fagles translation above, it is rendered as ‘man of twists and turns’, in others ‘the man of many ways’. Yet the underlying message about his character is evident: he can morph into a wide variety of different identities. And for what? In order to survive. It is his cunning, ultimately, that leads to the sacking of Troy by means of the legendary Trojan Horse. It is ironic that Achilles, whose physical power was not able to destroy Troy, gets to be the number one hero of the Iliad, and not Odysseus, who succeeded where Achilles failed. Odysseus, it seems, cannot draw his fame from the fall of Troy.

Whilst it is seemingly impossible for either to fit our modern sensibilities of heroism, both hold elements of them. Many have questioned whether, for Achilles, kindness, altruism, generosity and modesty were simply seen as weaknesses behind the fierce and brave exterior. In fact, Achilles spares Priam’s life in Book 24 of the Iliad, returning Hector’s body and even calling him ‘dear old man’. Is this kindness? His gentler character may be seen in his relationship with Patroclus, which is explored in Madeline Miller’s ‘The Song of Achilles’, in which Achilles and Odysseus are shown to be the only two characters who can maintain loving relationships.

Instead, we must consider two key elements central to heroism in this era: ‘kleos’ and ‘nostos’. ‘Kleos’ literally translates as fame and glory, whereas ‘nostos’ is described as a ‘song of safe homecoming’. It is Achilles’ destiny to choose between the two in the famous prophecy: he must either die a glorious death at a young age, or live to old age in obscurity. He chooses the former; he chooses kleos. Odysseus, on the other hand, returns to Ithaca in disguise, and it takes him a long time to prove to everyone that he really is the King of Ithaca, to re-establish his identity and ultimately achieve nostos. Yet he is seemingly one step above Achilles in this way: he attains his kleos from his nostos.

Both of these heroes are undeniably human men with the capacity for goodness, love and bravery. And whilst I believe Odysseus to be the greater hero, it is a debate that can never be settled simply because no one can know definitively what a hero is.

Follow @Classics_WHS on Twitter

O Chemistree, O Chemistree: The Wonder of Chemistry at Christmas

By Georgina Hagger, Year 12.

In this article I will endeavour to convince you of the magic of Chemistry, through Christmas related examples, and why we should all care a little bit more about not only the science itself but its contribution to our daily lives.

It’s the most wonderful time of the year, and whilst we all enjoy the lights, presents and the much-anticipated food, the reason behind all of these is forgotten. What makes your turkey go brown, what makes the smell of Christmas trees so enticing, and what do your wrapping paper and Sellotape have in common? To answer all these questions, we need one thing only: Chemistry. Chemistry is what makes this time of the year so enjoyable and yet it is overlooked, ignored and underrated.

When many foods are cooked, they undergo a reaction called the Maillard reaction: such is the case with the iconic Christmas turkey. This is a chemical reaction between reducing sugars (for example glucose) and amino acids, and the different combinations of these two components are what produce the many different flavour compounds formed in this reaction. In turkey, some of these compounds are furans, which produce the meaty, burnt flavours, and pyrazines, which give the cooked, roasted flavours. This reaction is also what makes crisps go golden brown, and gives some meat its brown colour, as melanoidins are formed which contribute to the brown colouration in cooking.

The smell of Christmas trees, and pine trees more generally, is much-loved. This scent comes from three main compounds: the two types of pinene (alpha-pinene and beta-pinene) and bornyl acetate. It is this bornyl acetate that produces the pine smell, making it commonly used in fragrances and air fresheners for that fresh aroma. This smell originates from just the three elements that the compound is made from: carbon, hydrogen and oxygen.

When giving a gift at Christmas, or at any other time of the year, the wrapping of the present is an important part. Whilst wrapping paper and Sellotape do not immediately seem that similar, they are in fact both based on the same fundamental compound, the very same compound that gives plants their strength: cellulose. Although Sellotape needs an additional adhesive layer, the two items are largely similar.

These ideas are all easy to understand, yet they are never talked about. Chemistry is simply defined as “the branch of science concerned with the substances of which matter is composed”, and with how these substances react with each other. When the discipline is defined in such a way, it is hard to see how it could not be part of our everyday lives. Rosalind Franklin, the brilliant and unfortunately often forgotten chemist, once said:

“Science and everyday life cannot and should not be separated.”

However, we seem to have strayed from this, and now Chemistry is just for the people in white coats and goggles, whilst the vast majority of people, according to a 2015 survey by the Royal Society of Chemistry, only associate the subject with their school days and scientists. Yet we take selfies on our lithium-powered smartphones, brush our teeth with our fluoride-filled toothpastes and cure headaches with medicine without even knowing how any of this actually happens.

You may now ask: why do we need to know about Chemistry? There are many answers to that question. The emergence of disciplines like Green Chemistry to combat the disastrous effect we have on our planet, and the shortage of engineers in this country alone, mean that more chemists are needed now than ever before. There is also the simpler answer: why should people not know? Why should everyone not have the chance to understand the world around them? In recent weeks we have seen guides written by scientists, including chemists, to explain the use of scientific methods – such as DNA fingerprinting – to judges, in order to aid better understanding of the chemistry that is used to prosecute and defend people in court. This is just one example of how chemistry is returning to the forefront of society and so needs to be understood.

Encouraging the sciences, and encouraging the explanation of the chemistry we all use, makes this one area of science so much more interesting and accessible to everyone. If everyone can hear about how the discipline is connected to their current situation through engaging explanations of something like Christmas or cooking or electronics, then perhaps fewer people will feel indifferent about Chemistry and more will feel interested and passionate about a subject that richly deserves and needs it.

So, as you pull a cracker this Christmas, become disgusted at the bitter taste of a brussels sprout, or watch the fireworks explode at New Year, remember to think about why and how these things happen, and add a little bit of Chemistry-induced magic to your life.

Follow @Chemistry_WHS on Twitter.

The importance of female composers and musicians in shaping the musical world

By Anna Kendall, Year 12.

When considering the world of classical music, the minds of most are filled with images of Mozart and Beethoven, Purcell and Vivaldi, Chopin and Grieg, all tremendous virtuosos whose compositions were fundamental in creating and developing the musical world. However, these pioneers all have one uniting quality: they are all male. For many, and indeed for myself, it is a challenge to think of even just one influential female composer, whilst it is easy to list countless prolific men.

Despite being regarded as inferior to the opposite sex in terms of importance in the history of music, women have been composing great works for over a millennium, from Hildegard von Bingen in the 12th century right through to the present day. Women have in fact made a significant contribution to the musical world which should not be overlooked.

Not only a composer of some 70 works, Hildegard von Bingen (1098-1179) was a German Benedictine abbess, writer, mystic and visionary. Attention in recent decades to women of the medieval Church has led to a great deal of popular interest in Hildegard’s music. Her most notable work is Ordo Virtutum (Play of the Virtues), a morality play thought to have been composed as early as 1151. The key feature of the work is how it exhibits her musical style: in the play, as with the majority of her works, the music is monophonic, that is, consisting of exactly one melodic line which dominates the piece. Her style is characterised by soaring melodies that pushed the boundaries of the typical chants of the medieval period. In this way, Hildegard was able to conform to the traditions of 12th-century chant whilst simultaneously pushing its boundaries, in many cases through her use of melismatic (rather than the traditional syllabic) recurring melodic units.

Moreover, despite Hildegard’s self-professed view that the purpose of her compositions was the praise of God, some scholars have asserted that Hildegard made a close association between music and the female body in her compositions. In her Symphonia (a collection of liturgical songs), the poetry and music can be read as concerned with the anatomy of female desire and could thus be described as Sapphonic, connecting her to a history of female rhetoricians. Given this, it seems astonishing that such a key figure of the early musical world can go unnoticed: Hildegard’s ideas laid the foundations for many great works.

Moving forward to the Romantic period, a better-known female composer is Fanny Mendelssohn (1805-1847). Sister of the distinguished composer Felix Mendelssohn, Fanny composed more than 460 works, including a piano trio and several books of piano pieces and songs. Having learned the piano from a young age, in 1820 Fanny, along with her brother Felix, joined the Sing-Akademie zu Berlin, which was led by Carl Friedrich Zelter. Zelter at one point favoured Fanny over Felix: in an 1831 letter to a friend he described Fanny’s skill as a pianist with the highest praise for a woman at the time: “She plays like a man.”

Notwithstanding her abilities, she faced numerous obstacles whilst trying to compose. Fanny was limited by the prevailing attitudes of the time towards women, attitudes apparently shared by her father, who was tolerant, rather than supportive, of her activities as a composer. He wrote to her in 1820: “Music will perhaps become [Felix’s] profession, while for you it can and must be only an ornament”. Her piano works are often in the style of songs and carry the title ‘Song without Words’. This style of piece was successfully developed by Felix, though some assert that Fanny preceded him in the genre, and the question of which of the siblings more rightly deserves credit for the style is debated amongst scholars. Nevertheless, Fanny was a key composer of the Romantic period who should not be hidden in the shadow of her brother.

The wife of Robert Schumann and herself one of the most distinguished pianists of her time, Clara Schumann (1819-1896) enjoyed a 61-year concert career. She was an incredible virtuoso, and was able to change the format and repertoire of the piano recital and the tastes of the listening public in the Romantic era. She was one of the first pianists to perform from memory, making that the standard for concertizing. Trained by her father to play by ear and to memorise, she gave public performances from memory as early as age thirteen, a fact noted as something exceptional by her reviewers.

However, for many years after her death Clara Schumann was not widely recognised as a composer. As part of the broad musical education given to her by her father, Clara Wieck learned to compose, and from childhood to middle age she produced a good body of work. Clara wrote that “composing gives me great pleasure… there is nothing that surpasses the joy of creation, if only because through it one wins hours of self-forgetfulness, when one lives in a world of sound”. At the young age of fourteen she wrote her piano concerto, with some help from Robert Schumann (a childhood companion who would later become her husband). However, as she grew older, she sadly became more preoccupied with other responsibilities in life and found it hard to compose regularly, writing, “I once believed that I possessed creative talent, but I have given up this idea; a woman must not desire to compose—there has never yet been one able to do it. Should I expect to be the one?”. This self-doubt eventually caused her to stop composing altogether: her compositional output decreased notably after she reached the age of thirty-six.

Today, her compositions are increasingly performed and recorded, and Clara is beginning to be recognised for her contributions, both as a performer and as a composer.

As well as these three key figures, there are countless other female composers throughout history who have helped to shape the musical world: Hildegard, Fanny and Clara are a brief introduction to a group of lost pioneers. It is in this modern age that we are able to uncover the hidden stories and works of these tremendous women, and I am hopeful that the erasure of women from musical history may be undone, and that these women may finally get the recognition they deserve.

Historical Guilt: Sorry seems to be the hardest word

Wimbledon High History

By Millie McMillan, Year 12.

The debate surrounding the question of historical guilt is a controversial and emotionally fraught conversation.

For the campaigners who lobby governments to admit their involvement and express their repentance for past actions, an apology is a necessary step towards reconciliation. For governments, and many citizens, an official apology seems like a misguided action, conceding financial liability and provoking backlash from their citizens. This question can be explored through the example of the British Empire: regarded as a symbol of colonial oppression by its critics, and as a nostalgic reminder of bygone British power by others.

“The United Kingdom is one of the few countries in the European Union that does not need to bury its 20th century history.”

This statement, allegedly tweeted by Liam Fox (Secretary of State for International Trade) in March 2016, exposes many of the problems surrounding the issue of historical guilt, for two reasons. Firstly, the word ‘bury’ is intriguing, implying the general desire of nations to hide or suppress their own questionable historical actions by refusing to address the significance and impacts of them. Whilst this is not at all surprising, it does raise questions about many nations not accepting accountability for their actions, and whether this is the best way to approach a future of reconciliation. Moreover, this statement exemplifies how many citizens see British imperial history in a ‘nostalgic’ light. However, whilst one can disagree with the sentiment expressed, it is the wider repercussions of such attitudes that are almost more alarming.

The question lies not with whether Britain should bury her history, but why it is perceived that nations need to bury theirs in the first place.

You may have personal grievances with the way in which Fox chooses to view the British Empire, yet even disregarding this, his statement is a symbol of a wider culture of glorifying historical achievements whilst vehemently refusing to acknowledge the episodes we would rather forget. We feel the need to bury our morally ambivalent actions, producing a warped view of historical events.

Surely it is this very approach – of sweeping historical misdeeds under the carpet, of equating ‘forgetting’ with ‘forgiving’ – that is most detrimental?

This question of historical guilt has yet another facet – it is not only about whether we should apologise, but also whether we are able to. The generations of today had no input into past actions, and therefore an apology is either a reconciliatory mark of separation from past mistakes, or an ‘empty’ gesture, with little significance or substance behind it. If an apology is defined as an expression of one’s regret at having wronged another, then it seems not only counterintuitive, but disingenuous, to deliver an apology for an action that you did not commit.

Nevertheless, if we choose to view an apology as a necessary step towards changing attitudes and actions, an opportunity for education and a sign of mutual respect, it becomes far more significant and long-lasting. A strained apology is hollow and superficial; a sincere apology offers solace and closure, tinged with optimism for a future of different approaches and better education.

Tony Blair’s 2006 expression of “deep sorrow” is the closest to an apology for the activities of the empire that the British Government has released thus far. Meanwhile, in other cheery news, a poll in 2014 revealed that 59% of those surveyed believed the British Empire was “something to be proud of”. It is not that the British Empire was solely a negative influence, but it is this perception of it being a source of ‘pride’ to generations that had no involvement in its creation or demise that seems somewhat confusing.

It is indicative of a flaw in the way in which the education system chooses to portray British history, glossing over the barbaric aspects of British rule and igniting a misplaced sense of patriotism amongst those eager to listen.

The question of whether countries should apologise for their actions remains, and will likely be a contentious issue for many years to come.

It is certain that we can no longer continue to ‘forget’ the events of the past. This approach achieves nothing except fostering a culture of ignorance and misguided ‘pride’. A reformed approach to national education regarding perceptions of Britain’s past is surely the best way in which historical guilt can be atoned for. An apology is but mere words; the longevity of an informed population with changed mind-sets, who no longer believe their homeland is infallible, is undoubtedly more significant.

Let us not feel the ‘need to bury’ the mistakes of the past, and instead use enhanced knowledge and wisdom of our history to create a better future.

Behind Closed Doors: The secret worlds within us…

By Rahi Patel, Year 12.

Have you ever wondered where the common phrase ‘gut feeling’ stems from, or how this meandering tube of smooth muscle could be related to such complexities as emotions?

For centuries Ayurvedic medicine (an ancient Indian branch of medicine) has regarded the gut as the centre of our wellbeing; however, in modern medical practice this once revered organ has been pushed to the side to make way for the big players: the brain and the heart. Recent developments in the medical field, though, are beginning to reveal evidence to support this ancient theory, showing us that our ‘gut feelings’ truly are the most significant.

In order to understand this rather counter-intuitive principle we must first establish the functions of our brain: it is the centre of conception and movement, the organ that led us to the discovery of electricity, and the organ that helps us to coordinate the complexities of standing for Mrs Lunnon in a Monday morning assembly. Although we have created a strong link between the term ‘self’ and our brains, we can see through the exploration of this underrated organ, the gut, that there may be more to ‘ourselves’ than what lies behind the eyes.

Our guts possess a multitude of nerves found uniquely in this part of the body. This immediately poses the question: why would such an elaborate and complicated system be needed if the sole purpose of the gut were to create a pleasant journey for food from the mouth through to the colon, accompanied by the occasional release of gaseous sounds?

So the scientists amongst us must all be wondering where the evidence is to support these new claims. Well, several experiments have been conducted around the world highlighting the importance of our gut with regard to mental well-being.

The ‘forced swimming test’ is a commonly used experiment to assess the effectiveness of antidepressants. A mouse is placed in a basin of water too deep for it to stand in, so it is forced to swim; mice with depressive tendencies give up swiftly, whereas those with greater motivation persevere. Most antidepressants tend to increase the time that a mouse swims for before giving up. One scientist, John Cryan, decided to feed half the mice with Lactobacillus rhamnosus, a bacterium widely known to be beneficial for gut health. Impressively, the mice with enhanced gut health not only swam for longer, but their blood also contained significantly lower levels of stress hormones.

Cooperation between the gut and the brain, via the vagus nerve, is thus proving to be a promising field for the treatment of mental disorders and diseases. The gut is our largest sensory organ, so it makes sense for the brain to form a close relationship with it in order to build a detailed image of the state of our bodies – after all, ‘knowledge is power’. This understanding is helping to shed light on complex neurological conditions, such as depression, as scientists are now aware that there is more to the ‘self’ than the brain, questioning the philosophical proposition ‘I think therefore I am’… maybe we should adapt this to ‘I eat, then I think, therefore I am’.

Lining the labyrinth that is the gut are approximately 100 trillion bacteria (weighing around 2kg), eagerly waiting to help us break down and assimilate the billions of particles that enter our bodies each day. They also help to produce new vitamins: sauerkraut, for example, is significantly higher in vitamins than plain cabbage.

Not only do our bacteria increase the nutrient value of our food, they also advise us on the foods that we should be eating – a perplexing idea, I know! But what we eat is a matter of life and death for our friendly cohabitants, so it only makes sense for them to influence our choices. In order to trigger a craving, the brain must be accessed, which is a tough feat considering the armour-like meninges and the blood–brain barrier. Bacteria can synthesise molecules small enough to access our brains, such as the amino acids tyrosine and tryptophan, which are converted into dopamine and serotonin within the brain. Of course this is not the only way in which cravings materialise, but it is far easier to influence our brain with bacteria than with genes, which may help to pave the way for future treatments for diseases such as hypertension and diabetes.

So next time you wonder why you’re craving a tortilla or your favourite brie, just eat it: around 95% of serotonin (the happiness hormone) is produced in the gut, and we all now know the significance of ‘gut feelings’ for our well-being!

The Rapid Growth of Artificial Intelligence (AI): Should We Be Worried?

By Kira Gerard, Year 12.

“With artificial intelligence we are summoning a demon.” – Elon Musk

In 2016, Google’s AI group, DeepMind, developed AlphaGo, a computer program that managed to beat the reigning world champion Lee Sedol at the complex board game Go. Last month, DeepMind unveiled a new version of AlphaGo, AlphaGo Zero, that mastered the game in only three days with no human help, only being given the basic rules and concepts to start with. While previous versions of AlphaGo trained against thousands of human professionals, this new iteration learns by playing games against itself, quickly surpassing the abilities of its earlier forms. Over 40 days of learning by itself, AlphaGo Zero overtook all other versions of AlphaGo, arguably becoming the best Go player in the world.
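To make the idea of ‘learning by playing games against itself’ more concrete, here is a minimal sketch – emphatically not DeepMind’s actual method, which relies on deep neural networks and Monte Carlo tree search – of a tiny self-play learner for noughts and crosses. The game, the tabular value function and the learning parameters are all illustrative assumptions, chosen only to show the shape of a self-play loop: play a game against yourself, then nudge your value estimates towards the outcome.

```python
import random
from collections import defaultdict

# The eight winning lines of a noughts-and-crosses board, cells indexed 0-8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

# Estimated value of each position from X's point of view, learned purely from self-play.
values = defaultdict(float)
ALPHA, EPSILON = 0.1, 0.2   # learning rate and exploration rate (illustrative choices)

def self_play_game():
    """Play one game against itself, then update value estimates towards the result."""
    board, player, history = [' '] * 9, 'X', []
    while winner(board) is None and ' ' in board:
        moves = [i for i, cell in enumerate(board) if cell == ' ']
        if random.random() < EPSILON:
            move = random.choice(moves)              # occasionally explore a random move
        else:
            def score(m):                            # otherwise pick the move whose
                nxt = board[:]                       # resulting position looks best
                nxt[m] = player                      # according to the current estimates
                v = values[''.join(nxt)]
                return v if player == 'X' else -v
            move = max(moves, key=score)
        board[move] = player
        history.append(''.join(board))
        player = 'O' if player == 'X' else 'X'
    result = {'X': 1.0, 'O': -1.0, None: 0.0}[winner(board)]
    for state in history:                            # nudge every visited position's value
        values[state] += ALPHA * (result - values[state])   # towards the final outcome

if __name__ == "__main__":
    for _ in range(20000):
        self_play_game()
    print(f"Learned value estimates for {len(values)} positions from self-play alone.")
```

Even this toy version captures the point the article makes about AlphaGo Zero: no human games are ever consulted, and everything the program ‘knows’ comes from the results of its own play.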

Artificial intelligence is defined as a branch of computer science that deals with the simulation of intelligent behaviour in computers, allowing machines to imitate human behaviour in highly complex ways. Simple AI systems are already widespread, from voice-recognition software such as Apple’s Siri and Amazon’s Echo, to video game AI that has become much more complex in recent years. It plays a key role in solving many problems, such as helping with air traffic control and fraud detection.

However, many people are concerned that the continued advancement of artificial intelligence could lead to computers that are able to think independently and can no longer be controlled by us, bringing about the demise of civilisation and life as we know it. In 2014 Elon Musk, the tech entrepreneur behind innovative companies such as Tesla and SpaceX, stated in an interview at MIT that he believed artificial intelligence (AI) is “our biggest existential threat” and that we need to be extremely careful. In recent years, Musk’s view has not changed, and he still reiterates the fear that has worried humanity for many years: that we will develop artificial intelligence powerful enough to surpass the human race entirely and become wholly independent.

As demonstrated in a multitude of sci-fi movies – 2001: A Space Odyssey, The Terminator and Ex Machina, to name a few – artificial intelligence is a growing concern among us, with the previously theoretical concept becoming more and more of a reality as technology continues to advance at a supremely high pace. Other prominent figures, such as Stephen Hawking and Bill Gates, have also expressed concern about the possible threat of AI, and in 2015 Hawking and Musk joined hundreds of AI researchers in sending a letter urging the UN to ban the use of autonomous weapons, warning that artificial intelligence could potentially become more dangerous than nuclear weapons.

This fear that AI could become so powerful that we cannot control it is a very real concern, but not one that should plague us with worry. The current artificial intelligence we have managed to develop is still very basic in comparison to how complex a fully independent AI would need to be. AlphaGo’s Lead Researcher, David Silver, stated that through the lack of human data used, “we’ve removed the constraints of human knowledge and it is able to create knowledge itself”. This is an astonishing advancement, and signals huge improvements in the way we are developing artificial intelligence, bringing us a step closer to producing a multi-functional general-purpose AI. However, AlphaGo Zero’s technology can only work with tasks that can be perfectly simulated in a computer, so highly advanced actions such as making independent decisions are still out of reach. Although we are on the way to developing AI that matches humans at a wide variety of tasks, there is still a lot more research and development needed before advanced AI will be commonplace.

The artificial intelligence we live with every day is very useful for us, and can be applied in a variety of ways. As addressed by Mr Kane in last week’s WimTeach blog, technology has an increasing role in things such as education, and we are becoming ever more reliant on technology. Artificial intelligence is unquestionably the next big advancement in computing, and as Elon Musk stated in a recent interview: “AI is a rare case where I think we need to be proactive in regulation instead of reactive… by the time we are reactive in regulation it is too late.” As long as we learn how to “avoid the risks”, as Hawking puts it, and ensure that we regulate the development of such technologies as closely as we can, our fears of a computer takeover and the downfall of humanity will never become reality.

Decolonising the Canon of English Literature

By Ava Vakil, Year 12.

If the purpose of literature is to represent the culture and tradition of a language or a people, can we really profess ourselves to be true students of literature when seemingly only focusing on a single culture and its peoples?

Such has been the question of a group of students from Cambridge University these past few weeks; there has been a cry from undergraduates to “decolonise” their English Literature syllabus by taking in more black and minority ethnic writers, and bringing more expansive post-colonial thought into the curriculum.

A kindred instance occurred at Yale University in May of last year, where there was widespread criticism of the requirements to graduate as a Yale English major. As it stands, a student is able to fulfil the requirements of the revered course without studying the literature of a single woman or minority writer.

However, as always after a plea for diversity, there comes the inevitable “But… (insert the name of any woman/minority)!”.

And whilst this may be true – and the likes of Austen and the Brontës have a fairly fixed place within the Canon of English Literature – it is simply not good enough; not only are women and minorities few and far between, but they tend to offer what I consider ‘one-step diversity’: white women, or gay men, or anyone who represents only one shift away from the ‘norm’ of the straight, white, cis-gender man. Where are the black female trans writers, and why aren’t they a key part of our education?

There is an urgent need to address the homogeneity of the curriculum within many universities and schools, along with that of the canon itself. The reason for this is not just diversity for diversity’s sake (though this has many benefits in itself), but that we are narrowing and constricting our understanding of literature and context by ignoring writers simply because they don’t have a place in the literary canon.

This does not mean refusing to study Shakespeare, Milton, Keats, Frost etc. but simply broadening our conceptualisation of what English Literature is.

As Dr Priyamvada Gopal, a teaching fellow at Churchill College, Cambridge, puts it:

“It is not just about adding texts but about rethinking the whole question of Britishness, Englishness and what they mean in relation to the empire and the post-imperial world… questions of race, gender, sexuality and so on.”

We are hampering and inhibiting our own knowledge under the colonial guise of the canon. Surely it should be impossible to study Othello or Jane Eyre without considering the post-colonial context? Or Twelfth Night without a wider multidisciplinary study of gender and sex?

Though it is against the nature of universities to want to politicise their curriculum, this happens by default when the syllabus simply reflects the age-old and continuing social, literary (and political) repression of anyone classified as “other”. Hence, cries from Twitter trolls about this being a ‘patrolling’ of the curriculum to suit the views of particular women and minority groups are intrinsically hypocritical.

The canon of literature has forever accorded with the politics of the majority, and appeals to change this are no more political than the sexist, racist and colonialist nature of the canon in the first place.

The need to change this system of subtle repression of writers within education must come from professors, teachers and students alike. Though there are concrete changes which need to be made in terms of the formal syllabus, as students we have a large part to play.

Read widely and read critically; consider racial and gender context; rewrite and reclaim what you consider “classic”. Most importantly, investigate the hidden underbelly of the canon of English literature – the texts that are excluded have just as big a part to play in the shaping of our society as the texts which sit smugly on the exclusive list.

“Let’s make our bookshelves reflect the diversity of our streets.” – Phil Earle

“British policy towards India changed completely from 1857-76.” How far do you agree?

Wimbledon High History

By Ellie Redpath, Year 12.

The Indian Mutiny of 1857-8 resulted in a change in British policy towards India: from an idealistic view, with the hope that India would one day become civilised enough under British rule to self-govern, to one of resigned moral duty coupled with a heightened awareness of the need to cement the security of the British Raj. However, it did not result in the complete eradication of the previous policies employed under Company rule. When policy is defined as the changes made by the British government with regard to the military system and administrative government of India, together with the changes to economic strategy, it becomes apparent that the policies were altered in order to avoid provoking a revival of violence by imposing Western ideology on the indigenous people. Normality for the Indian people remained largely the same as before the Mutiny; these policies were introduced solely as insurance that the events of the Mutiny would never be repeated.

The differences to the administrative government of India implemented after the Mutiny can ostensibly be seen as drastic, yet in reality resulted in little change other than to consolidate the restriction of the power of the indigenous people. An Indian Civil Service was created and the Governor General renamed the Viceroy, creating an illusion of the upheaval of the East India Company’s governance. Yet despite the change in title, the new Viceroy of India was in fact the same man who had been Governor General, Charles Canning, and he largely took on the same role as before 1857. The only tangible alteration was that he worked for the Government rather than the Company. Moreover, the Indian Civil Service was mainly composed of white British men, and whilst indigenous people were not prohibited from joining, the entrance tests were held in London, so entry was made near impossible for them; this had barely changed several decades later in 1905, when a mere 5% of its members were men from Bengal. The creation of the Civil Service therefore only served to strengthen the administrative control of the British over the Indians by limiting how much influence Indians had over their own government. Another ostensible change introduced by the British government was the return of authority to the indigenous rulers of the princely states, a reversal of Dalhousie’s Doctrine of Lapse. While this appeared to be an extreme shift from Britain’s pre-Mutiny policy, the Princes overwhelmingly complied with British legislation and the restoration of their power made little difference to everyday life; the British government gave back their former entitlements solely because doing so appeared to respect tradition. A considerable amount of bitterness had developed in recently annexed states such as Oudh, so this difference in policy was expected to help pacify the indigenous people and prevent future uprisings. Ultimately, the British changes to the administrative rule of India were not as severe for the majority as they could seem at first glance, and were made principally to cement British rule and influence in the subcontinent.

Britain’s modifications to the structure of the Indian military were slightly more radical, because it was sepoys in the East India Company’s army who had begun the Mutiny, so to avoid a repeated occurrence and to confirm that Britain held power over the army, it was necessary for Britain to change its military organisation in a more extreme fashion than it had changed administrative or economic policies. In order to prevent the recurrence of a similar incident, the religions and castes of the regiments were mixed to cut off any sense of unity against the British. This was intended to avert a situation like that of the Brahmin caste before the Mutiny – members of the elite Brahmin caste were forbidden to travel across the sea, yet this custom was often overlooked or ignored by British generals, leaving them to harbour resentment against the British. In addition, eighty-four percent of regiments in Bengal (where much of the resistance had originated) were replaced, in order to defuse any remaining tension in the area between the sepoys and their white officers. The number of British officers supervising a sepoy regiment was increased, and weapons were left under British control when not being used directly in battle, to ensure that any violence that broke out amongst sepoys would not immediately endanger the British generals. However, whilst more changes were enacted with regard to the Indian military than in Britain’s administrative or economic policy, they were almost all made with the objective of stopping future conflicts between sepoys and their officers from escalating into full-scale revolutions. It could be argued that because sepoys were treated with greater respect after the Mutiny, Britain’s aim was not to assert control over the Indian troops or remain distant from them, but rather to foster amiable relations between officers and their soldiers; yet this was another strategy used by Britain to create an illusion of interpersonal respect and so avoid further provocation of the indigenous peoples. Hence the military strategies of the British towards India only changed significantly because they were the most relevant in preventing the recurrence of a mutiny.

The changes to British economic policy towards India were not a complete reversal of policy under the East India Company; yet again, the changes that were made were directed towards curbing the economic progress and industrial independence of the indigenous people to secure British control over India. The British built over 3,000 miles of railway after 1857, a vast distance compared to the mere 288 miles built under Company rule. This development, whilst not entirely new – railway lines, albeit over short distances, had already existed before the Mutiny – simultaneously benefitted British trade, as it allowed the British to transport their goods over greater distances, increasing their wealth relative to the Indian economy, and allowed British troops to reach and crush any uprisings in remote areas far more quickly than they would otherwise have been able to. While one could argue that developing and promoting industry in remote areas was an equally important reason for the construction of railways, and thus that their purpose was not to consolidate the British Raj, Britain’s economic policies actually intended to hinder India’s industrial growth. The recently introduced policy of free trade made it far easier for Britain to bombard India with inexpensive British-manufactured goods, for which India would often have provided the raw materials. For example, India produced raw cotton for export to Britain, yet its textiles industry was crushed by imports of cheaper British cloth. India’s economic development was hence restrained: it remained reliant on exports of raw materials to Britain, but had no protected market in which to sell its own manufactured goods, so its own industry could not flourish when faced with British competition. Britain was therefore kept economically superior to India, securing its power over the country, whilst India was kept dependent on British trade for its economy to survive, strengthening its ties to Britain. Therefore, Britain’s economic policy changed somewhat after the Mutiny, owing to the addition of railways to hasten the transportation of troops and the import of British manufactured goods to limit Indian industry; however, because railways had first been developed by the East India Company, the adjustments were made only for the purpose of security over the region and were not so extreme that one could state that policy changed completely.

To conclude, the Indian Mutiny resulted in Britain altering its policy on India from one of forced Westernisation, with the ultimate aim of India achieving self-government, to one primarily focused on retaining British control and security in the subcontinent. However, outside of this shift in emphasis, little was changed, for life itself was not made radically different for the indigenous people; instead, the differences were precautionary, to avert the recurrence of brutality and ensure Britain remained the dominant power in India.