The Theory of Deconstruction – 21/09/18

Ava (Year 13, Head Girl) explores the Theory of Deconstruction as suggested by Derrida and discusses the confusing nature of both ideas and words.

Deconstruction is a theory principally put forward in the late 1960s and 1970s by the French philosopher Jacques Derrida, a man known for his leftist political views and, apparently, supremely fashionable coats. His theory essentially concerns the dismantling of our excessive loyalty to any particular idea, allowing us to see the aspects of truth that might be buried in its opposite. Derrida believed that all of our thinking was riddled with an unjustified habit of always privileging one thing over another; critically, this privileging involves a failure to see the full merits and value of the supposedly lesser part of the equation. His thesis can be applied to many age-old questions: take men and women, for example; men have systematically been privileged over women for centuries (for no sensible reason), meaning that society has often undervalued or overlooked the full worth of women.

Now this might make it sound as though Derrida held an exceedingly simplistic world view and was suggesting a sort of anarchy of language. But Derrida was far subtler than this – he simply wanted to use deconstruction to point out that ideas are always confused and riddled with logical defects, and that we must keep their messiness constantly in mind. He wanted to cure humanity of its love of crude simplicity and make us more comfortable with the permanently oscillating nature of wisdom. This is where my new favourite word comes in: Aporia – a Greek word meaning puzzlement. Derrida thought we should all be more comfortable with a state of Aporia, and suggested that refusing to deal with the confusion at the heart of language and life was to avoid grappling with the fraught and kaleidoscopic nature of reality.

This leads cleanly on to another of Derrida’s favourite words: différance, a critical outlook concerned with the relationship between text and meaning. The key idea is that you can never actually define a word; instead you merely defer to other words, which in themselves do not have concrete meanings. It all sounds rather airy-fairy and existentialist at this level, but if you break it down it becomes utterly reasonable. Imagine you have no idea what a tree is. Now if I try to explain a tree to you by saying it has branches and roots, this only works if you understand these other words. Thus, I am not truly defining a tree, but merely deferring to other words.

Now if those words themselves cannot be truly defined either, and you again have to defer, this uproots (excuse the pun!) the entire belief system at the heart of language. It is in essence a direct attack on Logocentrism, which Derrida understood as an over-hasty, naïve devotion to reason, logic and clear definition, underpinned by a faith in language as the natural and best way to communicate.

Now, Derrida clearly wasn’t unintelligent. He did not believe that all hierarchies should be removed, or that we should get rid of language as a whole, but simply that we should be more aware of the irrationality that lies between the lines of language, willingly submit to a more frequent state of “Aporia”, and spend a little more time deconstructing the language and ideas that have made up the world we live in today.

Classical Music – relevant to the youth of today? – 14/09/18

Louisa (Y13 Music Rep) investigates whether Classical Music is still relevant to young people of today and what can be gained from listening to it.

Classical music, once at the forefront of popular culture and entertainment, is nowadays often seen as a dying art. Frequently, classical music, an umbrella term for music spanning the baroque, classical and romantic eras, is described as an ‘elitist form of artistic expression’ that is only enjoyed by the old, the white and the rich. Its place as the leading form of musical entertainment has been taken by modern genres such as pop, rock and rap, which generally do not share the musical complexity of much of the classical and romantic music that used to dominate concert halls.

It is clear that interest in and enjoyment of classical music have decreased over the years, most prominently among today’s youth, despite increasing accessibility through platforms such as Spotify and YouTube. However, just because interest has fallen does not mean that “classical music is irrelevant to today’s youth”, as Radio 1 DJ ‘Kissy Sell Out’ publicly argued[1]. It is important that those with a platform in the music industry challenge the notion that classical music is only for a select group of elites, as there is so much to be gained from engaging with classical music, from education to its use in the media to understanding successful music in the modern world.

An area in which classical music is of utmost importance is education. Headlines such as “listening to an hour of Mozart a day can make your baby smarter”, outlining the so-called ‘Mozart effect’, frequently dominate the press. This longstanding myth that listening to Mozart as an infant correlates with intelligence has since been debunked as having little scientific merit.

Above: DJs including William Orbit and DJ Tiesto have famously remixed classical music, including Samuel Barber’s Adagio for Strings. Does this make the original more relevant?

However, there is evidence behind the notion that classical music has a positive effect on brain development and wellbeing. A study undertaken in 2014 by Zuk, Benjamin, and Kenyon found that adults and children with musical training exhibited cognitive advantages over their non-musically-trained counterparts. Adults with prior musical training performed better on tests of cognitive flexibility, working memory, and verbal fluency; and the musically-trained children performed better in verbal fluency and processing speed. The musically-trained children also exhibited enhanced brain activation in the areas associated with ‘executive functioning’ compared to children who had no previous musical training.

An additional study by the National Association for Music Education, together with researchers from the University of Kansas, found that participation in music programmes in American high schools was associated with higher GPA, SAT, ACT and IQ scores and other standardised test results, as well as fewer disciplinary problems, better attendance, and higher graduation rates.

[1] https://www.independent.co.uk/arts-entertainment/classical/features/radio-1-dj-kissy-sell-out-classical-music-is-irrelevant-to-todays-youth-2282561.html 23/03/18

These scores can have a great impact on future quality of life, as they directly influence which college one is able to attend, as well as future jobs and income.

Another important use of classical music is in modern day media. Film music is a genre that directly stems from classical music. Its widespread use in movies and television means classical music is constantly permeating our daily lives and it would be naïve to pretend it is irrelevant.

Film music serves several purposes in films, including enhancing the emotional impact of scenes and inducing emotional reactions in viewers. The effects are widespread and particularly evident when watching a scene without the accompanying music. For example, watching the famous shower scene from Hitchcock’s Psycho without Herrmann’s music makes the scene appear almost comical, and it certainly lacks the fear and suspense the scene is meant to evoke.

The film music industry is very successful, especially among younger generations, with film music scoring the highest number of downloads of instrumental music. Similarly, the videogame music industry has recently taken off in terms of popularity and recognition within the music community. Videogame music has striking similarities to both film music and elements of classical music, with the main difference being that it must be able to repeat indefinitely to accompany gameplay. The Royal Philharmonic Orchestra recently announced that it was to play a PlayStation concert to celebrate videogame music. James Williams, director of the RPO, describes the planned concert as a “signpost for where orchestral music is expanding”.

Whilst the music itself is not classical, it uses many elements of classical composition and is significantly influenced by it. It is arguably the most similar genre to music of the classical era in the modern day. This shows how it is not always obvious where derivatives of classical music can appear in the media of young people, yet the stigma is still very present. James Williams argues that if classical music were rebranded as ‘orchestral music’, to include popular film and videogame music, it would help to destigmatise the term. Classical music is vital as the basis of these new and expanding genres of music that are very popular among younger generations.

Within society, there are many other instances in which classical music is used and very relevant, although not in its original context. In advertising, classical music and derivatives of classical music are widely used in order to promote specific products and target specific groups of people. A 2014 study from North Carolina State University shows how the correct musical soundtrack in an advert can “increase attention, making an ad more likely to be noticed, viewed, and understood; enhance enjoyment and emotional response; aid memorability and recall; induce positive mood; forge positive associations between brands and the music through classic conditioning; enhance key messages; influence intention and likelihood to buy”.

The brain has evolved to encode emotional memories more deeply than non-emotional ones, and memories formed with a relevant, resonant musical component are stored as emotional memories. This means that adverts with suitable music are more likely to be remembered and acted upon. Clearly, regardless of whether or not classical music is actively listened to by young people, it plays a very active part in our society and therefore cannot be labelled as being irrelevant.

Above: Film music for Harry Potter and the Philosopher’s Stone being performed in real time alongside the movie.

Despite classical music being stereotypically more popular within older social circles, it is still very relevant for today’s youth, whether in or out of its traditional context. Claims that the popularity of classical music is decreasing can be countered if the definition of classical music is expanded to include related genres such as film, videogame and advertising music, which are all very popular and relevant; on top of this, classical music has huge benefits for the cognitive development of young people. Therefore, it cannot be argued that classical music is irrelevant to today’s youth.

Further Reading:

This is your brain on music – Daniel Levitin, Dutton Penguin, 2006

https://www.gramophone.co.uk/blog/editors-blog/the-relevance-of-classical-music

https://www.theguardian.com/music/2009/apr/02/classical-music-children

Have a listen to Barber’s Adagio for Strings, and the remixes by Orbit and Tiesto, below:

Barber: https://www.youtube.com/watch?v=N3MHeNt6Yjs

Orbit: https://www.youtube.com/watch?v=VIbIHxKh9bk

Tiesto: https://www.youtube.com/watch?v=8CwIPa5VM18

Euripides: a misogynist or a prototype feminist? – 07/09/18

Anna (Year 13) explores the works of Euripides and endeavours to establish whether he was a feminist through analysis of his plays.

Often regarded as a cornerstone of ancient literary education, Euripides was a tragedian of classical Athens. Along with Aeschylus and Sophocles, he is one of the three ancient Greek tragedians for whom a significant number of plays have survived. Aristotle described him as “the most tragic of poets” – he focused on the inner lives and motives of his characters in a way that was previously unheard of. This was especially true in the sympathy he demonstrated to all victims of society, which included women. Euripides was undoubtedly the first playwright to place women at the centre of many of his works. However, there is much debate as to whether by doing this, Euripides can be considered to be a ‘prototype feminist’, or whether the portrayal of these women in the plays themselves undermines this completely.

Let us first consider Medea. The play focuses on the eponymous heroine, and centres around her calculated desire for revenge against her unfaithful husband, Jason, which she achieves by killing his new wife and her own two children, then fleeing to start a new life in Athens. Medea is undoubtedly a strong and powerful figure who refuses to conform to societal expectations, and through her Euripides to an extent sympathetically explores the disadvantages of being a woman in a patriarchal society. Because of this, the text has often been read as proto-feminist by modern readers. In contrast with this, Medea’s barbarian identity, and in particular her filicide, would have greatly antagonised a 5th Century Greek audience, and her savage behaviour caused many to see her as a villain.

This negative reception of Euripides’ female characters was echoed in the Greek audience’s response to Euripides’ initial interpretation of the Hippolytus myth, in which Aphrodite causes Phaedra, Hippolytus’ stepmother, to fall in love with her stepson, which has horrific consequences. It is believed that Euripides first treated the myth in a play called ‘Hippolytus Veiled’. Although this version is now lost, we know that he portrayed a shamelessly lustful Phaedra who directly propositioned Hippolytus on stage, which was strongly disliked by the Athenian audience. The surviving play, entitled simply ‘Hippolytus’, offers a much more even-handed and psychologically complex treatment of the characters: Phaedra admirably tries to quell her lust at all times. However, it could be argued that any pathos for her is lost when she unjustly condemns Hippolytus by leaving a suicide note stating that he raped her, which she does partly to preserve her own reputation, but also perhaps to take revenge for his earlier insults to her and her sex. It is debatable as to whether Euripides is trying to evoke sympathy for Phaedra and her unfortunate situation, or whether through her revenge she can ultimately be seen as a villain in the play.

However, if we look at Hecuba, Andromache, and The Trojan Women, we see how the evils of war have a grave effect on women, and in his play ‘Ion’ Euripides sympathetically portrays Creusa, who was raped by Apollo and forced to cover up the scandal. Although some believe it is difficult to fully label Euripides as a feminist, he nonetheless understood the complexities of female emotion in a new and revolutionary way, whether audiences, both then and now, view his female characters as heroines or as villains.

Links and further reading:

https://www.libertarianism.org/columns/ancient-greeces-legacy-liberty-euripides-woes-woman

https://etd.ohiolink.edu/!etd.send_file?accession=ouashonors1428872998&disposition=inline

Nanotechnology and its future in medicine – 07/09/18

Maya (Year 11) discusses the uses of nanotechnology in medicine, thinking about how far it has come and how it has helped doctors. She also considers the dangerous aspects of using such small technology and the future benefits it may bring.

Technology in medicine has come a long way, and with it the introduction of nanotechnology. Nanotechnology is the manipulation of structures and properties at the atomic and molecular scale; a nanometre is one-billionth of a metre. The technology has many uses, such as in electronics, energy production and medicine, and is valuable precisely because of its diverse applications. Nanotechnology is useful in medicine because of its size and the way it interacts with biological molecules of the same proportion or larger. It is a valuable new tool that is being used for research and for combatting various diseases.

In medicine, nanotechnology is already being used in a wide variety of areas, the principal one being cancer treatment. In 2006 a report issued by NanoBiotech Pharma stated that developments related to nanotechnology would mostly be focused on cancer treatments. Thus, drugs such as Doxil, used to treat ovarian cancer, use nanotechnology to evade and overcome the effects of the immune system, enabling the drug to be delivered to the disease-specific areas of the body. Nanotechnology is also helping in neuroscience, where European researchers are currently using the technology to carry electrical activity across the dead brain tissue left behind by strokes and illnesses. The initial research was carried out to gain a more in-depth analysis of the brain and to create more bio-compatible grids (pieces of technology that surgeons place in the brain to find where a seizure has taken place). The result is more sophisticated than previous technologies and, when implanted, does not cause as much damage to existing brain tissue.

Beyond combatting cancer and aiding research, nanotechnology is used in many areas of medicine, from appetite control to medical tools, bone replacement and even hormone therapy. Nanotechnology is advancing all areas of medicine, with nano-sized particles enhancing new bone growth, and there are even wound dressings containing nanoparticles that provide powerful antimicrobial protection. It is with these new developments that we are revolutionising the field of medicine, and with further advancements we will be able to treat diseases as soon as they are detected.

Scientists hope that in the future nanotechnology can go even further and replace chemotherapy altogether: cancer could be fought using gold and silica nanoparticles that bind to the mutated cells in the body, with infra-red lasers then heating the gold particles to kill the tumour cells. This application would be beneficial because it would reduce the risk of damage to surrounding healthy cells, which the laser would affect far less than chemotherapy does.

In other areas, nanotechnology is developing further in diagnostics and medical data collection. Using this technology, doctors would be able to look for the damaged genes associated with particular cancers and screen tumour tissue faster and earlier than before. The process involves nano-scale devices being distributed through the body to detect chemical changes. There is also an external scan using quantum dots on a patient’s DNA, which is then sequenced to check whether they carry a particular debilitating gene, providing a quicker and easier way for doctors to check in detail whether a patient has contracted any illnesses or diseases. Furthermore, nanotechnology will give doctors a more in-depth analysis and understanding of the body than x-rays and scans can provide.

While this is a great start for nanotechnology, there is still little known about how some of the technology might affect the body. Insoluble nanoparticles, for example, carry a high risk of building up in organs because they cannot diffuse into the bloodstream. And because nanoparticles are so small, there is no controlling where they go: they might enter cells and even their nuclei, which could be very dangerous for the patient. The House of Lords Science and Technology Committee has reported concerns about the effects of nanotechnology on human health, stating that sufficient research has not been conducted on “understanding the behaviour and toxicology of nanomaterials” and that this has not been given enough priority, especially given the speed at which nanotechnology is being produced.

Nanotechnology is advancing medical treatment at a rapid rate, with innovative new technologies approved each year to help combat illnesses and diseases. Whilst more research needs to be conducted, the application of nanomedicine promises benefits that could prove highly valuable. Overall, given the great burden that conditions like cancer, Alzheimer’s, HIV and cardiovascular disease impose on current healthcare systems, nanotechnology has the potential to revolutionise healthcare with its advanced techniques as it progresses.

@Biology_WHS 

Glamour and Hedonism: Why the American Jazz Age Still Intrigues Us

Laura (Year 11) explores what makes the Jazz Age a significant time in America’s history and how it has been preserved through music and literature.

The American Jazz Age, or the “Roaring Twenties”, brings to mind many images of feathers, flapper dancers and flamboyance. As the 1920s were characterised by rapid stock market expansion, successful Americans spent more and flaunted their wealth, throwing extravagant parties. Reminders of the era cannot be avoided, as it inspires the fashion, films and music of today. F. Scott Fitzgerald’s 1925 novel The Great Gatsby captured the essence of the time and offers a paradigm of the jazz age. When Baz Luhrmann took on the challenge of adapting it for film, the movie made $353.6 million at the box office, as audiences were captivated by the romance of the period.

Whilst the 1920s saw people move away from the austere and unpromising life of the Great War years, they also brought new changes and difficulties with them. This new America had lost faith in its organisation and structure, having become disillusioned by war and patriotism. The parties and indulgence reflected a newfound individualism as traditional values were left behind. Many were critical of the more frivolous lifestyle in cities, as ideas of morality seemed to shift. Prohibition, the 1920 ban on alcohol, seemed only to encourage more drinking in clandestine speakeasies, and organised crime and bribery were rife. But the era was also characterised by modernisation and greater liberation, especially for women. The 19th Amendment was ratified in 1920, giving women the vote, and social changes followed as women in the workplace became more of a norm and gender roles were questioned. Even fashion became more liberating as short skirts and short hair became popular.

The jazz music that fuelled the parties of the rich and powerful in 1920s America first came from the African-American communities of New Orleans and had its origins in blues. With a more free, improvisational style, it broke musical norms whilst social conventions were being dismantled in America. With better recording of music during the mid-1920s, this new style spread quickly, and radio broadcasting allowed more rapid popularisation of the genre, as it reached people of all ages and classes. Although the US was still a place of deep-rooted racism and xenophobia, and many conservatives feared the influence of “the devil’s music”, jazz’s popularity was a step towards better inclusion in American society. When Luhrmann made his adaptation of The Great Gatsby, the music was a key element of the film. Modern hip hop and traditional jazz were both a part of the soundtrack. It cleverly blended music that evoked the era with new music that allows the modern audience to experience what it was like to listen to something completely new and unheard. Luhrmann said that “the energy of jazz is caught in the energy of hip-hop”. Check out the Jazz Spotify playlist on the Music Department Spotify here.

Ernest Hemingway, Gertrude Stein and F. Scott Fitzgerald are among the authors that have helped to preserve the excitement and intensity of the Jazz Age in their writing and are part of the “Lost Generation” writers, who came of age during the Great War. Main themes in their writing included the opulence and wealth of the 1920s, but also the damaging effects of hedonism and disillusionment. Idealised versions of the past are often seen in writing of the era, reflecting on how the indulgence and enjoyment was overwhelming and even put individuals out of touch with reality. Fitzgerald describes one of Jay Gatsby’s parties:

“The lights grow brighter as the earth lurches away from the sun, and now the orchestra is playing yellow cocktail music, and the opera of voices pitches a key higher. Laughter is easier minute by minute, spilled with prodigality, tipped out at a cheerful word.”

The giddy description shows an uncomfortable confusion of the senses, as the narrator, Nick Carraway, discovers the exciting city life. However, Fitzgerald also reveals a world damaged by war, as the “valley of ashes” in the novel represents the effects of industrialisation and modernisation on the less wealthy, and the social inequality of the time. When Carraway, having served in the First World War, notes that Jordan Baker had an “erect carriage which she accentuated by throwing her body backward at the shoulders like a young cadet”, we see how his vision is clouded by his experiences of war. The literature of the jazz age endures because it shows not only the glamour and thrill of the period, but also offers sobering reflections on the price of the new lifestyle.

The sparks of wealth and excitement of the Roaring Twenties were stamped on by the Wall Street Crash of October 1929 and were extinguished abruptly. As the terrible poverty of the Great Depression began, Fitzgerald wrote “Echoes of the Jazz Age”, recalling the earlier, more prosperous times.

“It bore him up, flattered him and gave him more money than he had dreamed of, simply for telling people that he felt as they did, that something had to be done with all the nervous energy stored up and unexpended in the War.”

It is no surprise that the Jazz Age has aged so well. The excitement and romance of the period has captivated readers and audiences, and this formative period of American history is not forgotten.

Men writing about women: how male authors have depicted female characters

Lydia (Year 11) investigates the portrayal of women in literature, a field that has largely been controlled by the male voice, and how this has changed throughout the centuries.

The literary world has always been (and remains) dominated by men. As male writers create their female characters, they often fall short of capturing the interesting, vivid complexity of womanhood which we, as women, know to be reality. Since Shakespeare’s heyday, it’s fair to say that the role of women in society has changed significantly; but how has this change affected how male writers portray women in literature, if it has at all?

When Shakespeare’s Macbeth was first performed (circa 1606), women were expected to be submissive to their husbands and were punished for being ‘scolds’ or ‘nags’. Fear of women’s speech was prevalent, spread by imperious treatises. The extremely popular treatise Anatomy of a Woman’s Tongue claimed

“A woman’s tongue it is the devil’s seat;

and that it is a most pernicious lyar,

a backbiter and a consuming fire”

summing up misogynistic attitudes of the Jacobeans with a catchy rhyme. And, of course, there were the witch burnings, vast numbers of women executed for being transgressive, reclusive or powerful.

This attitude towards women is noticeable in Shakespeare’s work as he links the powerful women in Macbeth to the world of spirits and demons. Through studying Macbeth and watching various productions, such as that performed at the National Theatre, it is easy to be struck by how Shakespeare presents women as a manipulative force, blaming them for the immoral actions of men. The misogynistic attitude behind this becomes obvious when compared to how Shakespeare presents Macbeth himself, murderer of the sleeping king and his own close friend, as a basically good, if slightly unhinged, man.

Another greatly beloved male writer is Charles Dickens. By the time Great Expectations was published in 1860, the Jacobeans and their witches were centuries dead, though reductive attitudes towards women lived on. The continued ownership of women by men, and the surprising lack of social progress in the centuries between Macbeth and Great Expectations, are revealed by the similarities between Dickens’ and Shakespeare’s portrayals of women.

In Macbeth, Shakespeare portrays Lady Macbeth’s desire for power as unnatural and dangerous as she orders spirits to remove remorse and conscience from her body and replace her breast-milk with acid. Dickens uses uncannily similar imagery of mutilation and hardening in Great Expectations as the proud Estella claims “I have no heart … I have no softness there, no—sympathy—sentiment—nonsense” and Miss Havisham echoes “I stole her heart away and put ice in its place.” This idea that all women who transgress the role of tender, servile femininity must be unnatural perversions of nature is used by both Shakespeare and Dickens, revealing that, despite the centuries passed, men continued to hold the same views regarding the role of women in society.

Dickens is famed for his portrayal of meek, simpering virgins (often paired with epithets such as ‘dear’ and ‘little’). In Great Expectations this trope manifests itself in Clara Barley. Clara is a paradigm of servility, tending constantly to her abusive father yet still managing to appear “natural and winning” as well as “confiding, loving and innocent”. Dickens reveals Victorian attitudes that women should be submissive, praising Clara’s “modest manner of yielding herself to Herbert’s embracing arm.” The anti-Estella, Clara is just as two-dimensional, a figment of the imagination of the imperious male.

The 20th century saw great change in how women were viewed in society, with women earning the vote in Britain in 1918. The end of the 1920s saw the Equal Franchise Act passed, granting equal voting rights to women. Did this rapid progress, and the surging momentum of the feminist movement, pave the way for a parade of wonderful heroines written by men?

It seems not, if John Steinbeck’s Of Mice and Men, best-seller of the 1930s, is anything to go by, featuring not one named female character, only ‘Curley’s wife’, the self-absorbed, cruel and teasing caricature of female shallowness and naivety. Steinbeck punishes this woman for her crimes of promiscuity and stupidity with a broken neck, echoing the methods of past writers: Shakespeare delivers Lady Macbeth a grisly suicide, and Estella is condemned to a life of abuse at the hands of Drummle. Though Steinbeck sticks fast to the well-trodden tropes of two-dimensional femininity, the literary world gained a growing female voice in this decade as Daphne du Maurier, Agatha Christie and Virginia Woolf began penning novels of their own, featuring swathes of heroines.

Men have, on occasion, written brilliant female characters – Shakespeare’s Juliet and Tolstoy’s Anna Karenina, for example – but these are exceptions to a centuries-old pattern in which women are written as either fantastical paragons of innocence or cruel monsters. The recent Twitter trend which inspired this article asserts that this pattern marches onwards to the present day. So as far as writing funny, interesting, realistic women goes, I guess it’s down to us.

Follow @English_WHS on Twitter

Twilight vs Jane Eyre: what separates the two?

Claire (Y11) investigates the relationship between Twilight and two classic novels, Jane Eyre and Wuthering Heights, to discover what makes them similar but more importantly what divides them.

It is a truth universally acknowledged that books written about vampires are almost exclusively terrible, and in no case is this truer than in that of Twilight. Widely considered to be one of the worst books ever written, it is generally believed to have absolutely no literary merit whatsoever, despite the story itself being age-old and popular under many other circumstances. Among those circumstances is Charlotte Brontë’s Jane Eyre, and to a lesser extent, Emily Brontë’s Wuthering Heights. What is interesting is that both of those books are hailed as literary classics and taught on many school curriculums, despite the fact that in terms of storyline, there is very little that actually separates them from Twilight. So why is it that the Brontë sisters’ novels are deemed masterful works of literature, whilst Twilight is relegated to the trash pile?

In many ways, the books are very similar. All three have a brooding, Byronic male protagonist – Edward Cullen, Edward Rochester, and Heathcliff – who falls in love with a female protagonist – Bella Swan, Jane Eyre, and Catherine Earnshaw. There are definitely some differences in the traits of these female protagonists, but at a basic level, they are addicted to their counterparts in a ‘passionate’ love which actually seems more like an abusive relationship. Catherine says, “I am Heathcliff”, a sentiment very similar to Bella’s expression that “(her) life was about (Edward)” and although Jane famously states that she is a “free human being with an independent will”, apparently marking her out as not being as addictively in love as Catherine and Bella, she does eventually fall prey to the same obsession.

All three stories are also set in isolated locations, cut off from society and rendering the romance central to the plot as there is very little outside influence. Bella finds herself in the small American town of Forks, where Edward and his family take centre stage as local curiosities. Jane is relegated to Ferndean Manor, which itself is located deep within the forest, and the other characters have little influence on her relationship with Mr Rochester. And Wuthering Heights, of course, is set in the iconic Yorkshire moors, where the surroundings seem to reflect the dynamic of the relationship – harsh, inhospitable, and inexplicably alluring. This feeling of isolation, this detachment from the outside world is what allows all three romances to blossom into the obsessive and damaging relationships they ultimately become.

There are other similarities, of course. All three female protagonists are ‘othered’ by society, be it by virtue of simply being a teenage girl or by their feminist principles. All three are ceaselessly self-obsessed; Jane dwells on herself endlessly, Bella is conscious of her every flaw, and although Catherine seems far less self-deprecating, that does not mean she is not selfish. And throughout all three novels runs a feeling of the scandalous, the inappropriate and the exciting. Edward is, of course, a vampire, and over 100 years older than Bella is. Mr Rochester is Jane’s employer, and there is a gap in class and age. Heathcliff is Catherine’s adopted brother, and implied to be Romani in origin – certainly, he is not white. These factors tie common threads between the stories, linking their characters.

What is it that renders Jane Eyre and Wuthering Heights simply better novels than Twilight?

So if the characters are essentially the same, the setting is essentially the same, and the basic storyline is essentially the same, what is it that renders Jane Eyre and Wuthering Heights simply better novels than Twilight? One of the most obvious factors is the context. The Brontës were women writing books at a time when women generally didn’t, and although their novels were published under pseudonyms, it does not detract from the fact that they rebelled against the norm and published works of fiction which went on to become classics. It would have taken extraordinary courage and talent to write and then attempt to publish such literature, and it shows in the novels themselves. By contrast, Meyer was able to sit down and simply write, and then have her book published with relative ease. This means Twilight, although not altogether terrible, is simply very boring. It is not a novel of challenge and struggle, and that too shows in the writing.

Another thing that sets the Brontë sisters’ works apart from Meyer’s is the fact that Wuthering Heights and Jane Eyre are, objectively, better written. The prose style of Twilight is simple, bland and unremarkable – the descriptions are extremely mundane and Bella’s simpering first-person style is insipid at best and exasperating at worst. In contrast, both Brontë sisters offer rich and interesting writing, replete with elegant descriptions of landscape and emotion, and when Jane narrates, her first-person voice is, although occasionally irritating, undoubtedly compelling. Wuthering Heights is also a much more complex story, spanning generations of characters without losing track of the development of any, and although it is not the polished masterpiece that Jane Eyre is, it is certainly a triumph. Twilight is, to put it kindly, not.

None of this is to say that Twilight isn’t worth reading. All literature has an inherent anthropological value, in that it teaches us about the culture of the time, and Twilight does a very good job of informing readers that in the mid-2000s, people were obsessed by vampires. The undeniable fact, however, is that it is not a particularly good book, and that for all its similarities to the Brontë sisters’ masterpieces, it does not hold up well in comparison.

@English_WHS 

Does honesty have a place in the law?

Lilly (Y11) discusses the importance of honesty in law and whether the practice of Law and the Justice System needs honesty to function.

When considering this question, it is tempting to immediately think of the skilful yet exploitative way in which lawyers can bend any law or lunge into any loophole that will win their case, and thus to conclude that honesty most definitely does not fit into law. This narrative is one heard in stories stretching back centuries; if we stretch back three, to the words of the 18th-century poet John Gay, he describes it as follows:

I know you lawyers can with ease,

Twist words and meanings as you please;

That language, by your skill made pliant,

Will bend to favour every client;

That ’tis the fee directs the sense,

To make out either side’s pretence.

Now, it is incorrect to say that there is no truth in this. If we think about why lawyers are hired in the first place, it is obviously to find ways in which the law can be used to favour their client. It is doubtful that many people would hire a lawyer and tell them to find whatever they believed to be the most honest plan of action to take, rather than what will let them walk free. In other words, a lawyer’s job in essence is to manipulate the legal terms and conditions in order to present a client who seems like an innocent person in front of a jury, but perhaps looks more like a thief when they close the doors of the court behind them.

The law is a set of codes, and the program that the codes feed in to allows for our society to be regulated and run smoothly.

To really understand this question, we have to strip the very concept of law back to its basics. The Oxford Dictionary defines law as “the system of rules which a particular country or community recognises as regulating the actions of its members and which it may enforce by the imposition of penalties.” In essence, the law is a set of codes, and the program that the codes feed into allows our society to be regulated and run smoothly. This view of society as fairly binary (right or wrong) does not allow any scope for honesty; therefore, it could be said that the law is much like a computer program, with lawyers being the coders, operating the whole show.

However, at the risk of sounding like we live in some sort of dystopian novel controlled by an IT department of lawyers, let’s look at why this idea is flawed. This stripped-down view of law is very different from our unfortunately much less simple reality, in which honesty must have a purpose. If you cut right to the heart of law, you see that honesty is in fact an integral part of its composition. It’s no mistake that one swears to tell “the truth, the whole truth, and nothing but the truth” at a trial. This reflects not just the presence, but the necessity, of truth in law.

Yet even so, a necessity for honesty doesn’t necessarily translate into a guarantee of honesty. Every single human is dishonest at some point (if not many points) during their lives. On the whole, this dishonesty manifests itself in the form of white lies, which are largely harmless. It is only for a smaller portion of society that it presents itself as serious crime. But this is the important part: the very system of law only works on the assumption that human beings are honest most of the time. If we were the opposite, i.e. dishonest most of the time, the system of law as we know it would not be able to function, and instead we would probably be living under a system where the means of control were strictly military. Conversely, if human beings were intrinsically honest, we wouldn’t need law at all, and would probably live in something more like the computer analogy mentioned earlier. The assumption that we are honest most of the time means that the law functions by proving someone to be dishonest, reflected in the well-known phrase “innocent until proven guilty”. Can you imagine if humans were recognised by the law to be dishonest most of the time? This would not only blur the lines as to whether a law that was made was good or bad, but also make it incredibly hard to distinguish between a law created to prosecute and one created to persecute (which would probably result in martial law).

If human beings were intrinsically honest, this would mean that we wouldn’t need law at all.

In short, there isn’t a straightforward answer to this question. Many have argued against honesty’s place in law on the grounds of the self-interested, survivalist nature of humans: even if honesty did have a place in law, it is easily undermined by the fact that humans are driven by self-preservation, which usually doesn’t coincide with being completely honest. But even so, it is undebatable that the degree to which we regard honesty as significant in law has a huge influence on our society.

Castles: architecture and story


Daisy (Y12) explores the significance of the castles that are dotted all over the British Isles, arguing that we should look beyond their architectural genius to study the stories behind them.

Castles are undoubtedly the most important architectural legacy of the Middle Ages. In terms of scale and sheer number, they dwarf every other form of ancient monument and dominate the British landscape. What’s more, the public has an enduring love affair with these great buildings, since they play an intrinsic role in our heritage and culture: over 50 million people pay a visit to a British castle each year.

Arundel Castle, West Sussex

The period between the Normans landing at Pevensey in 1066 and that famous day in 1485 when Richard III lost his horse and his head at Bosworth (consequently ushering the Tudors and the Early Modern period into England) marks a rare flowering of British construction. Whilst the idea of “fitness for purpose” was an important aspect of medieval architecture, the great castles of the era demonstrate that buildings had both practical and symbolic uses.

One of the most iconic forms of medieval castle was the Motte and Bailey, which came hand in hand with the Norman conquest of 1066. The Bayeux Tapestry eloquently depicts the Norman adventurers landing at Pevensey and hurrying to occupy nearby Hastings. The men paused on the Sussex coast and are said to have indulged in an elaborate meal at which they discussed their plans for occupying the highly contested country. The caption of the Tapestry notes “This man orders a castle to be dug at Hastings”, which is followed by a rich scene illustrating a group of men clutching picks and shovels heading off to start their mission: castles were the Normans’ first port of call. In the years that followed, castle-building was an intense and penetrating campaign, with one Anglo-Saxon chronicler stating that the Normans “built castles far and wide throughout the land, oppressing the unhappy people, and things went ever from bad to worse”. It was vital for William to establish royal authority over his newly conquered lands, and thus the country saw the mass erection of potentially more than 1,000 Motte and Bailey castles. With these buildings, speed was of the essence, and so the base, or Bailey, was made of wood, while the Keep (which sat on top of the Motte) was relatively small and made of stone.

A drawing displaying a Motte and Bailey Castle.

Despite the fact that these castles obviously served a defensive purpose, with the elevation of the Motte providing the Norman noble with a panoramic view of the surrounding countryside, their symbolic purpose was arguably of greater significance. The erection of such a large number of castles altered the geopolitical landscape of the country for ever and made sure that the Norman presence would be felt and respected. Figuratively, the raised Keep acted as a physical manifestation of the Normans’ dominance over the Anglo-Saxons, with the upwards gradient effortlessly representing the authority and superiority of the Lord, the fundamental aspect of the feudal system. Additionally, as many academics emphasise, the fact that castles were often located to command road and river routes for defensive purposes meant that their possessors were also well placed to control trade, and thus could both exploit and protect mercantile traffic.

Another key development in castle building occurred roughly a century after the Battle of Hastings, during the reign of Henry II. Prior to Henry’s accession, England had been burdened by civil war and a period known as the “anarchy” under his predecessor, King Stephen. Taking advantage of the confusion and lawlessness of Stephen’s reign, the barons had become fiercely independent. They had not only issued their own coinage but also built a significant number of adulterine castles and unlawfully appropriated large parts of the royal demesne. As a result, in order to establish royal authority, Henry set about demolishing these illegal castles en masse, on which he expended some £21,500, further highlighting his supremacy. Thus, as seen under the Normans, castles were again used as a means of establishing royal authority. From here the British landscape was significantly altered for a second time, which, in a period lacking efficient communication and technologies, would have been a highly visible, emblematic change impacting all members of society, from the richest of the gentry to the everyday medieval peasant.

As a result, whilst it is important to appreciate the architectural styles and physical construction of medieval castles, I believe it is vital to acknowledge their symbolic nature, and to appreciate how, through the introduction of such fortresses, people’s lives would change forever.

Follow @History_WHS on Twitter.

Should standardised exams be exchanged for another form of assessment?


Jasmine (Year 11) explores the merits and weaknesses of exams as the formal assessment of intelligence, discussing whether an alternative should be introduced that suits all students.

Exams – the bane of existence for some but an excellent opportunity to excel for others. Thought to have originated in China, with the standardised “imperial exam” introduced in 605 AD, they are the education system’s way of assessing the mental ability and knowledge of students whilst also providing a practical method of comparison with others across the country. They are therefore an important factor and indicator for employers. But does this strict, rigid method really work for assessing intelligence, or is it just a memory game that only a select few can win?

I surveyed 80 students, asking whether they think exams should be exchanged for another form of assessment, and 78% agreed that they should. However, when asked about their reasoning, it was mostly down to a stereotypical dislike of the stressful exam period. Some who agreed with the statement also mentioned the unrealistic exam conditions that would not occur in daily life. One example put forward was that during a language oral exam a great amount of pressure is placed on students, causing them to become nervous and not perform to their best ability. In a real-life conversational situation, however, they would not have to recite pre-prepared answers and the pressure would be taken off, so the conversation would flow more naturally. This shows that although someone may have real fluency and talent for the language, their expertise will not be recognised and rewarded accordingly.

Among many students, examinations are accused of being memory tests that only suit a certain learning style, and the gradual abolition of coursework at GCSE level is contributing to this. Consider the many people in the country who have learning difficulties such as dyslexia: these students may be particularly bright and diligent workers, yet their brains do not function in the way exams rely on them to. Nonetheless, if they are put in front of a practical task that they have learned through experience, they are deemed to be far more knowledgeable and perceptive. Studies show that learning something consistently over a long period of time helps it stay in our memory, and though it is important to ingrain essential facts, especially at GCSE level, GCSEs mostly consist of learning facts over a period of around 2-3 years followed by a final exam, which does not particularly reward consistent learning and amounts to little more than an overflow of information.

The stress caused by the lead-up, the exams themselves, and the subsequent wait for results is also a major factor in the argument that traditional standardised tests should be replaced. According to the NSPCC, from 2015-2016 there was a 21% increase in counselling sessions for 15-18 year olds affected by exam stress, many of whom would have been taking GCSEs and A Levels. Some say that the stress these tests cause is necessary for success and mimics the stresses of the real world; but how essential are some of these exams, like non-calculator Maths papers, when nowadays most people have calculators on their phones? Exams are also said to create healthy competition that prepares people for the struggles and competitive nature of the modern working world and motivates students, but couldn’t this be done with another form of assessment that is more suited to the individual student?

However, the use of different approaches to examination may, in fact, risk corrupting the test. Grading would become largely subjective and there would be more scope for some candidates to gain an unfair advantage over others. The restrictive nature of our exams today, with a set time, set paper and set rules, does ensure that fairness is a priority, but is the exam itself really the most equal way to test so many different students?

Standardised exams are not the best way of determining the knowledge and intelligence of students around the world, because of the stress and pressure they cause, the fact that they only suit certain learning styles, and how poorly they reflect real-life situations in the working world. Changing the form of these assessments may, however, cause grades to be unreliable. My suggestion would be smaller, more practical assessments throughout the course that all contribute to the final grade, as this puts less pressure on students and helps those who rely on different learning strategies to excel and demonstrate their full potential.