Hotspotting: the conservation strategy to save our wildlife?


Alex (Year 11) investigates whether the strategy of hotspot conservation is effective in reducing mass extinction rates, or whether this strategy is not all it claims to be.

Back in 2007, Professor Norman Myers was named a Time Magazine Hero of the Environment for his conservation work on biodiversity hotspots. He first came up with the concept of hotspot conservation in 1988, when he expressed his fear that ‘the number of species threatened with extinction far outstrips available conservation resources’. The main idea was to identify hotspots of biodiversity around the world and concentrate conservation efforts there, saving the greatest possible number of species.

Myers’ fears are even more relevant now than they were 30 years ago. According to scientific estimates, dozens of species are becoming extinct every day, making this the worst wave of extinctions since the death of the dinosaurs 65 million years ago. And this is not as naturally occurring as a giant meteor colliding with the Earth: 99% of the species on the IUCN Red List of Threatened Species are at risk from human activities such as ocean pollution and habitat loss through deforestation, amongst other things. It is therefore crucial that we act now and adopt a range of conservation strategies to give our ecosystems a chance of survival for future generations.

To be accepted as a hotspot, a region must meet two criteria: firstly, it must contain a minimum of 1,500 endemic (native or restricted to a certain area) plant species, and secondly, it must have lost at least 70% of its original vegetation. Following these rules, 35 areas around the world, ranging from the Tropical Andes in South America to the more than 7,100 islands of the Philippines and the whole of New Zealand and Madagascar, were identified as hotspots. These areas cover only 2.3% of Earth’s total land surface but contain more than 50% of the world’s endemic plant species and 43% of endemic terrestrial bird, mammal, reptile and amphibian species, making them crucial to the world’s biodiversity.

This concept has been hailed as a work of genius by conservationists and has consequently been adopted by many conservation agencies such as Conservation International, which believes that success in conserving these areas and their endemic species will have ‘an enormous impact in securing our global biodiversity’.

The principal barrier to all conservation efforts is funding: buying territories and caring for them costs a great deal of money, which is primarily raised from businesses, governments and individual donors. Most of this funding comes from campaigns focused on charismatic megafauna such as the penguin or the snow leopard. These campaigns motivate people because they feel a closer connection to the animals and a sense that their donation is really making a difference in conserving these species. When conservation is done at a larger, regional level, donors feel less control over how the work is carried out, and so less of the gratification that comes with donating. By identifying 35 specific areas towards which funds can be concentrated, hotspot conservation reconnects the public, as well as larger companies and local governmental bodies, to the projects, thereby encouraging more donations. It is for this reason that hotspot conservation has received £740 million, the largest amount ever assigned to a single conservation strategy.

Although the 35 identified areas are relatively widespread and well funded, this strategy has been criticised for its neglect of other crucial ecosystems. First of all, there are no hotspots in northern Europe and many other areas around the world, so many species of both flora and fauna are left out. Also, because the criteria for classification as a hotspot are defined in terms of endemic plant species, many species of fauna are neglected, from insects to large and endangered species such as elephants, rhinos, bears and wolves. Furthermore, areas referred to as ‘coldspots’ are ignored altogether. This could lead to the collapse of entire ecosystems following the extinction of key species.

Another major issue with this strategy is that terrestrial environments make up only around 29.2% of the Earth’s surface area. The other 70.8% is covered by very diverse (but also very threatened) oceans and seas. Marine environments are overlooked by hotspot conservationists because they rarely have 1,500 endemic plant species: deep oceans with very little light are not an ideal environment for plant growth, and species living near the surface are rarely confined to one specific area, so they do not count as endemic.

So, if even the more successful strategies for conservation are so flawed, is there any hope for the future? I think that yes, there is. Although there is no way to save all the species on earth, identifying crucially important areas to concentrate our efforts on is essential to modern conservation efforts. Hotspot conservation is definitely improving the ecological situation in these 35 areas and so those efforts should be continued, but that doesn’t mean that all conservation efforts should be focussed only on these hotspots. Hotspot conservation should be part of the overall strategy for reduction of mass extinction rates, but it is not the fix-all solution that some claim it is.

Follow @Geography_WHS & @EnviroRep_WHS on Twitter.

Can we hope for junk-free Space?

Leslie in Year 11 discusses the increasing threat of junk in Earth’s orbit, the significance and urgency of removing it, and whether a new experiment, led by the Surrey Space Centre, will provide a potential solution to the crowded orbit.

Since the dawn of the space age in the mid-20th century, the rising interest in outer space has resulted in a vast amount of space debris. This under-reported phenomenon, also known as space junk or space waste, is the cluttering of Earth’s orbit with man-made objects, and it has potentially dangerous consequences. But why should it capture people’s attention globally?

Hundreds of thousands of objects – unused satellites from all over the world and fragments of spacecraft (including spent rocket stages and paint flakes) – share orbits with functioning spacecraft. This is because many pieces of unwanted space debris take a long time, even decades, to deorbit and fall back to Earth. Clearly, with rising global interest in space exploration, the chances of collision are growing ever greater.

A report from the U.S. National Research Council in 2011 warned NASA that the ‘amount of orbiting space debris was at a critical level…enough currently in orbit to continually collide and create even more debris, raising the risk of spacecraft failures’. More than half a decade has passed since then, and the removal of space debris seems more urgent than ever.

A key part of the solution is the removal of space waste from orbit; this is important because even tiny particles of less than 1 cm can have dramatic effects, due to the high speed at which they travel and the consequent risk of collisions. Perhaps surprisingly, these particles are a major threat to spacewalking astronauts and humans aboard spacecraft. Whilst it is important to acknowledge that collisions are unlikely, space being unimaginably huge, the possible consequences could be dramatic, making it essential to diminish the growing threat posed by space debris.
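As a rough, back-of-the-envelope illustration (the figures are my own assumptions, not from the article: a 1 cm aluminium fragment and a typical low-Earth-orbit closing speed of around 10 km/s), a quick calculation shows just how much energy a tiny object can carry:

```python
# Back-of-the-envelope sketch: kinetic energy of a small piece of orbital debris.
# Assumptions (not from the article): a 1 cm diameter aluminium sphere and a
# representative closing speed of ~10 km/s for collisions in low Earth orbit.
import math

density_aluminium = 2700        # kg per cubic metre
radius = 0.005                  # metres (1 cm diameter fragment)
closing_speed = 10_000          # metres per second (assumed)

mass = density_aluminium * (4 / 3) * math.pi * radius ** 3   # about 1.4 grams
kinetic_energy = 0.5 * mass * closing_speed ** 2             # joules

print(f"mass ≈ {mass * 1000:.1f} g, kinetic energy ≈ {kinetic_energy / 1000:.0f} kJ")
```

That works out at roughly 70 kJ – about the kinetic energy of a one-tonne car travelling at 40 km/h – carried by a fragment lighter than a coin.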

To demonstrate this point, less than two years ago Sentinel-1A suffered an impact, in which an object slammed into one of its solar panels and caused a dent nearly half a metre across. Had the main spacecraft been hit, the result would have been serious damage. Holger Krag, Head of ESA’s Space Debris Office at ESOC (European Space Operations Centre), stated, ‘We appear to have survived this unexpected collision with minimal impact on this particular satellite. We may not be so fortuitous next time.’

Leading space agencies have repeatedly emphasised the critical quantity of space debris. Although space travel has always carried risks, the rising amount of space junk puts existing spacecraft under continuous threat, especially as millions of small particles are untraceable. Encouraging further experiments focused on removing this debris is therefore necessary, and the urgency of finding a solution is putting many space agencies under pressure.

The solution may be closer to home than we think! Not too far from Wimbledon, the ongoing RemoveDebris mission at Surrey Space Centre aims to capture and destroy space debris using low-cost methods, which will hopefully reduce the risk of future collisions. The experiment, planned for launch this year, tests four ways of tackling space debris: a net experiment, a VBN (vision-based navigation) experiment, a harpoon and deployable target experiment, and a DragSail. RemoveDebris will carry its own ‘junk’ and measure the success of these methods in space. If they prove successful, it will be a step towards a safer orbit for the future.

The first experiment involves capturing debris by firing a net. When the CubeSat (released by RemoveDebris to act as the target) is at a distance of 7 m, the net is fired at it. The net’s large surface area increases drag and enables the CubeSat to deorbit at an accelerated rate, which will hopefully remove that piece of debris from space.

Airbus, an international aerospace company, is involved in the harpoon and target experiment, and many scientists believe that this could in fact provide the solution to space junk. In the RemoveDebris experiment, a miniature harpoon is planned to be on board. A DragSail, also on board, is designed, according to Surrey Space Centre, to quicken the de-orbit of the satellite when deployed and to speed up the rate at which it burns up in the Earth’s atmosphere.

If this experiment succeeds in removing space debris, it will lessen the risk of collision, creating a safer environment for functioning satellites and other space vehicles, especially those with humans aboard. This is a necessary precaution before further steps are taken in space exploration, and success here would provide a new, innovative way to increase safety in outer space.

Although this experiment provides hope for a better solution to the problem of space debris, how long it will take to make the orbit safe again remains an open question. Nevertheless, the many experiments being undertaken to tackle this pressing problem provide some consolation. Although it seems like we are extremely far from junk-free space, it might not be an impossibility.

Follow @Physics_at_WHS on Twitter.

Artificial Intelligence and the future of work

By Isabelle Zeidler, Year 7.

What is AI, and how will it change our future?

Firstly, for AI to work there are three key requirements: data, hardware and algorithms. An example of data is the set of words in a dictionary saved on a computer; you need this because otherwise Google Translate won’t work. Hardware is necessary so that the computer is able to store and process data. Lastly, algorithms are what many of us know as programming: the instructions that let us do something with our data.

The history of AI is longer than we imagine; we have used AI since the 1950s. Machine Learning (ML) is a kind of AI, and we have used ML since the 1980s. The most modern kind of ML is Deep Learning (DL). Many of us do not know about this, but a lot of us know the companies that use it; among the most advanced in DL are Google and IBM, with Watson. So why is DL so amazing? In ML, programmers still hand-code some of the rules; in DL, the system learns these rules by observation. This is similar to what happens when babies learn to speak – they rely on observing others.
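To make that difference concrete, here is a minimal toy sketch in Python (my own example, not from the article, and a huge simplification of real ML and DL): the first function applies a rule written by a programmer, while the second ‘learns’ its rule by observing a handful of labelled examples.

```python
# Toy contrast between a hand-coded rule and a rule "learned by observation".
# Made-up task: decide whether a message is spam from how often "free" appears.

# Rule given by the programmer: the threshold (2) is chosen by hand.
def spam_by_rule(text):
    return text.lower().split().count("free") >= 2

# Labelled examples the program can "observe" (all invented for illustration).
examples = [
    ("free prize free money", True),
    ("meeting at noon", False),
    ("free offer just for you free free", True),
    ("lunch tomorrow?", False),
]

# "Learning": try every threshold and keep the one that fits the examples best.
def learn_threshold(examples):
    counts = [(text.lower().split().count("free"), label) for text, label in examples]
    best_threshold, best_correct = 0, -1
    for threshold in range(0, 5):
        correct = sum((count >= threshold) == label for count, label in counts)
        if correct > best_correct:
            best_threshold, best_correct = threshold, correct
    return best_threshold

learned_threshold = learn_threshold(examples)

def spam_learned(text):
    return text.lower().split().count("free") >= learned_threshold

print(spam_by_rule("free lunch"), spam_learned("free free stuff"))  # False True
```

Real deep learning systems use millions of examples and far more flexible models, but the principle is the same: the rule comes from the data rather than from the programmer.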

There are four amazing skills which AI can do:

  • computer vision
  • natural language processing
  • complex independent navigation
  • machine learning

Not all AI uses all of these abilities. Examples of computer vision include the new passport control gates at airports. Another very popular example is face recognition on an iPhone X or Surface Pro. The second skill is natural language processing: the ability to understand language. A relevant example is Alexa. In the future, some call centres will also use AI’s ability to understand language (it has already started). For example, when you call a bank, a robot will be able to answer even complex inquiries, not just tell you your account balance. Examples of complex independent navigation include modern technologies like drones and planes.

Do you think that AI may soon even be better than humans?

Well, it is happening already. Focusing on image recognition, some scientists compared the accuracy of machines with that of humans. Humans’ accuracy is about 97%. But AI’s accuracy has changed dramatically: eight years ago, machines were 65% accurate; in 2016, machines drew level with humans at 97%; today, in 2018, machines are even better than humans. This is why AI is very likely to change our world, both positively and negatively. Some positive examples are that AI-powered machines can understand many languages, can speak in many different accents, are never tired or grumpy, and may be cheaper.

In 1997, IBM took the first big step. For the first time, a machine – IBM’s Deep Blue – won against a human at chess. Programmers had programmed all the moves, and the machine didn’t need modern AI, let alone ML or DL. Nineteen years later, another exciting game was played. In an even more complex game than chess, the board game ‘Go’, a machine won against world champion Lee Sedol. With Go, however, Google faced a big problem: the game has too many possible moves to programme in advance. So, Google’s programmers used AI: they programmed the rules and objective of the game, and on that basis the AI won. Later, AlphaGo lost against AlphaGo Zero. Both programs used AI, but AlphaGo Zero was even more advanced: it learnt by playing against itself, starting from just the rules of the game.

Will AI powered machines replace workers?

How much time could be saved by using AI in the future? McKinsey compared which human skills will be easiest to replace in the future. The skills that would be easy to replace include predictable physical work (building cars is already being automated) and collecting and processing data (because this is what computers do all the time, like a calculator). On the other hand, the four activities that would not be easily replaced are management, expertise (applying judgement), interface (interacting with people) and unpredictable physical work (e.g. caretakers). The research group discovered that fewer than 10% of jobs can be fully automated, but more than 50% of work activities can be.

What will the future look like?

The following jobs will be in high demand: care providers, educators, managers, professionals and creatives. So, if you are interested in becoming a doctor, teacher, scientist, engineer, programmer or artist, you are less likely to be replaced by robots. However, AI will also take away jobs, such as those in customer interaction and office support. Waiting tables and working on an IT helpdesk will no longer be such promising careers (robots will fix robots!).

There are three main reasons why these jobs will be automated: to save costs, to provide better customer service and to offer entirely new capabilities. The main reason is better service. Saving costs also plays a big role, e.g. in building cars. And offshore oil and gas platforms will be taken over by robots because the work is less dangerous for robots, which can go to most places.

In conclusion, AI is already taking over some elements of jobs. As the technology progresses, however, many more jobs may be automated.

The safest jobs are the ones with social skills.

(source: report by Susan Lund from McKinsey: https://www.mckinsey.com/~/media/McKinsey/Global%20Themes/Future%20of%20Organizations/What%20the%20future%20of%20work%20will%20mean%20for%20jobs%20skills%20and%20wages/MGI-Jobs-Lost-Jobs-Gained-Report-December-6-2017.ashx )

Follow @STEAM_WHS on Twitter

How far can fashion trends be considered to be dictated by the social and political climate?

Alice Lavelle (Y13) looks into how fashion taste can be shaped by different trends in social and political thinking.

In this February’s Vogue there was an article by Ellie Pithers ascribing the sudden popularity of the jagged hemline, among both designers and consumers, to the current uncertain political climate, post-Brexit and post-Trump. Pithers claimed, with support from the Preen designer Thea Bregazzi, that the sudden interest in the more bohemian, asymmetrical hem was a representation of people’s confusion and uncertainty following both Britain leaving the EU and Trump being elected president. Pithers further highlighted how this trend of rollercoaster hemlines can be linked to the fluctuating value of the pound and, more generally, the uncertain economic climate, citing the climbing hemlines of the prosperous twenties and sixties and the ankle-grazing skirts of the poorer thirties as her evidence. How far this can be considered true, rather than an overzealous journalist reading too far into an otherwise trivial catwalk trend, is of course debatable.

However, I would argue that this link between fashion and politics is not only accurate in today’s changing social climate, but one that can be seen throughout history – and, when considering this idea, one name immediately springs to mind: Jackie Kennedy. The First Lady was a style icon in the United States throughout her husband’s presidency, with the clothes and styles she wore immediately being copied by designers up and down the country. However, what the women of the time who looked to the First Lady as a means of inspiration were not aware of was that her beautifully designed gowns and brightly coloured skirt suits were in fact designed in response to changing US political policies. Following the McCarthyist era of the 1950s, the United States was pushing to reinvent itself as a progressive, self-believing nation, and Jackie’s traditional yet simultaneously cosmopolitan ensembles, with a hint of European influence at the hands of Hollywood designer Oleg Cassini, were essentially a well-crafted response to the country’s growing global presence.

Looking further back at iconic moments in the history of fashion, it becomes more and more evident that the garments which have shaped the way we dress today were themselves shaped by the political climate in which they were created. Take Christian Dior’s ‘New Look’, the long-skirted, cinch-waisted silhouette that reinvented feminine dress, created in 1947 in response to the more liberal society emerging after the Second World War. Or Paco Rabanne’s metal disc dress of 1966 – favouring experimentation over practicality, this design embodied the hopes of the emerging European society.

In terms of designers creating garments in response to the social climate, there is Rudi Gernreich’s topless dress in the early ’70s, highlighting the still persistent objectification of the female form. This was rapidly followed by Bill Gibb’s eclectic, romantic collection of 1972, which paved the way for the ‘hippie movement’ within design, and the debut of Diane von Furstenberg’s iconic wrap dress in 1973 – a garment that became synonymous with female empowerment in the workplace and a statement of society’s changing attitude towards women. The speed with which these popular styles changed and evolved is a further illustration of how the fashion industry responded to changing attitudes towards women, again showing how intrinsically linked fashion and political trends are.

And this concept, as explained by Pithers, is relevant today beyond the sudden popularity of rollercoaster hemlines. The spring shows in September all indicated that the previous androgynous styles of autumn/winter were out, and that feminine florals and chiffon were back, this time with an edge of female empowerment. Models walked the Dior catwalk in white T-shirts emblazoned with the slogan ‘We should all be feminists’, taken from the title of an essay by the Nigerian-born writer Chimamanda Ngozi Adichie – a bold statement from Maria Grazia Chiuri, the newly appointed first female head of the iconic fashion house. This surge of feminism across the spring/summer shows was again more than just a trivial fashion trend; it was an embodiment of the rising power of women in the workplace and within politics – with Hillary Clinton at that time still a potential president of the US.

And it is these trends, the jagged hemlines and cinched waists, that eventually filter down through the high-street stores and into our wardrobes – meaning the clothes that we wear, either to make a statement or purely because they are comfortable, are essentially a physical representation of our current uncertainty about the political climate in a post-Brexit, post-Trump universe.

Follow the WHS DT department on Twitter.

Cross-gender casting in Shakespeare’s plays: Does it solve the problem of gender inequality?

Cecelia (Year 12) investigates the modern and historical practice of cross-gender casting in Shakespeare’s plays.

From Tamsin Greig’s Malvolia to Maxine Peake’s Hamlet, cross-gender casting is becoming increasingly popular in British theatre, never more so than in Shakespeare’s plays. New adaptations wanting to put a spin on the 400-year-old plays now look to casting women in the typically male roles of Lear, Macbeth and Othello. As well as allowing a play to be seen through a different, feminine perspective and offering a completely new interpretation of the character, cross-gender casting gives women the opportunity to embody some of theatre’s most complex and popular roles.

However, this seemingly ‘modern’ twist on Shakespeare’s work is not as revolutionary as we may think. When Shakespeare wrote the majority of his work, women were not allowed to perform on stage, and so his female characters were always played by young boys or men. As much as gender-blind casting can provide a wider range of roles for actresses, is it always effective, and where should the line be drawn?

It is no wonder that Shakespeare’s work is constantly being revisited and adapted: his original text is so complex and diverse that something new can be gleaned from it with every new actor. Hamlet is the most frequently adapted Shakespearian play and has one of the longest histories of women playing the title role. The character of Hamlet is uncertain, passive and lacking in resolve – qualities that are typically seen as feminine. Hamlet’s effeminate side has led to the character often being portrayed by women, with some believing that they can inhabit the role with more ease as they are able to fully connect with the feminine side of his personality.

Some of the most famous Victorian Hamlets were women: Sarah Bernhardt’s and Alice Marriott’s Hamlets were highly regarded by most critics, with the part said to have benefitted from their “injection of femininity” (Catherine Belsey). Despite this, some critics argued that it was impossible for an actress to truly comprehend and identify with the thoughts and emotions of a man – a line of argument that is still present today. With this in mind, some productions choose to play the character of Hamlet as a woman, as demonstrated in Asta Nielsen’s portrayal of Princess Hamlet in the 1920 silent film. Nielsen played Hamlet as a woman masquerading as a man, possessing all the masculine skills and lacking only the instinct to kill. But regardless of the past success of actresses playing the Dane, there is still a public reluctance to accept this change; a 2014 YouGov poll found that 48% of Britons were not happy with the idea of a female Hamlet.

Many argue that by changing the gender of the actor, the gender of the character is effectively altered as well; as such, must the text itself be adjusted and if so, to what extent?

Whilst Vanessa Redgrave played the male role of Prospero, Helen Mirren’s Prospera was a female rewrite of the original. For most of Julie Taymor’s film version of The Tempest, the change to Prospera worked, but because her daughter, Miranda, stayed female, the relationship between the magician and the child became complicated. The dynamics between a father and a daughter are vastly different from those between a mother and a daughter, and The Tempest is inherently a complex dissection of the fraught bond between a father and his daughter. The removal of this crucial theme dramatically altered the message of the entire piece and, as such, did not sit well with many audience members.

Whilst cross-gender casting did occur in the 18th and 19th centuries, it has gained huge popularity in the last 20 years. As gender is beginning to be seen less as a biological definition and more as a social construct, the idea of a woman playing a man, or vice versa, has become far more acceptable. Our intrinsic understanding of male and female characteristics has changed, along with the ways in which we wish to see them portrayed on stage.

Of course, the opportunity for great actresses to play great Shakespearian roles is positive. As well as giving women the chance to play classic and multifaceted roles, it allows directors to create something new out of a play that has been around for hundreds of years.

Despite this, as we move forward, the dramatic community must place more of an emphasis on the creation of original female roles which share the same complexity and breadth of emotion as that of their male counterparts. Juliet Stevenson summarised the debate neatly with her statement on the red carpet that she “want[s] great parts for women, not women playing great parts for men”.

Twitter: @English_WHS

The Effect of Legalised Abortion on Crime


Ava (Year 12) investigates how changes in the right to abortion impacted crime rates in the USA during the late 20th century.

Christmas Day, 1989. Crime is just about at its peak in the United States. Within the fifteen years preceding this day, violent crime has risen by over 80 per cent. All seemed set to continue like this, with crime following the same upward trajectory it had been on for many years. However, in the early 1990s, crime began to fall sharply and dramatically in a totally unexpected way; criminologists, police officials, politicians and economists all failed to predict this sudden fall and could offer no clear explanation for why it had occurred.

Many theories were thrown around, from innovative policing strategies, to a stronger economy, yet none seemed to offer an expansive or conclusive argument. That is, until Donohue and Levitt hypothesised that this fall in crime rates could all be traced back to a winter day in 1973… the day when legalised abortion was suddenly extended to the entirety of the United States.

The US has always had a fraught and complicated history regarding abortion. In the embryonic days of the nation, abortion was permissible until the first movements of the foetus could be felt; in 1828, New York became the first state to restrict abortion, and by 1900 it had been made illegal throughout the country. Through the 60s, several states began to allow abortion under extreme circumstances, such as rape, and by the 70s, five states had made abortion entirely legal and broadly available.

It was not until the 22nd of January 1973 that legalised abortion suddenly rippled through the rest of the country, due to the US Supreme Court’s ruling in Roe v. Wade. This pivotal moment in American legal history, in which Justice Blackmun concluded that “the detriment that the State would impose upon the pregnant woman by denying this choice [abortion] is altogether apparent”, would go on to have a monumental impact on crime rates during the 1990s.

Before Roe v. Wade, abortion was expensive and inaccessible, reserved for the daughters of middle-to-upper-class families; now, however, any woman could obtain an abortion, often for less than 100 dollars. The social impact of this was ground-breaking. Now, a woman who was unmarried, in her teens, or poor (sometimes all three) would be able to take advantage of Roe v. Wade. Women from these socio-economic backgrounds are likely to bring up children who are 50% more likely to live in child poverty and 60% more likely to grow up with one parent. These two factors combined (childhood poverty and a single-parent household) are among the strongest predictors that a child will have a criminal future. That is not to say that they always predict criminal behaviour, but rather that, in certain circumstances, they provide very strong indicators that a child will eventually contribute to rising crime rates.

In the first year after Roe v. Wade, there was one abortion for every 4 live births in the United States, and by 1980, one for every 2.25 births. Herein lies the reason why legalising abortion had a larger effect on lowering crime rates than any other single measure: the women in the United States who were most likely to raise a child who would go on to contribute to crime were now the women most likely to take advantage of the new legal measures allowing them the choice to abort.

In the 1990s, just as children born around the time of Roe v. Wade would have been hitting their teenage years – the years in which young men are most likely to commit crime – the rate of crime began to fall.

This theory – known as the Donohue-Levitt hypothesis – has provoked strong reactions from many. Firstly, among the politicians who entered heated debates claiming that their new policing strategies were the reason for the crime slump; secondly, among the public, many of whom simply did not believe it. If you are still in need of some convincing, take a look at similar situations in Canada, Australia and Romania, all countries which legalised abortion in some way and saw a drastic fall in crime rates in subsequent years. Better yet, the five US states which legalised abortion five years before Roe v. Wade saw a decrease in crime… five years before the rest.

Steve Sailer and John Lott are two critics who have been vociferous in their rejection of the model. They claim that Donohue and Levitt ignore the indisputable fact that the homicide rate of young males (especially young Black males) temporarily skyrocketed in the late 1980s, young men who were born right around the time of the legalisation of abortion. However, Levitt provides a lengthy retort to this on his blog, which can be found here, if you are so inclined. In it, he comments on the importance of crack cocaine to understanding the fuller picture.

Therefore, whilst it is more comforting to believe that effective governance has had the biggest impact on falling crime rates within the past thirty years, in reality the granting of choice to women in America is in fact the largest reason for the fall in crime rates during the 1990s.

Inspired by “Where Did All the Criminals Go? (Chapter 4), Freakonomics” – Levitt, Donohue

@Freakonomics

Twitter: @DH_Pastoral

Lorca’s Women

Federico García Lorca explored the female soul as no other male writer had done before. His vivid presentation of the effects of oppression and of the internalisation of emotion that women endure, in the plays Bodas de Sangre, Yerma and La Casa de Bernarda Alba, is unique and profound. Moreover, Lorca was highly influenced by the period of “modernismo” under way in Spain during his lifetime; he was, indeed, close friends with the Surrealist painter Salvador Dalí. Modernist writing reflects less on society and more on individuals, and thus it gave Lorca the opportunity to delve deeper into the psychological “state” that is womanhood. Bella Gate (Year 12) summarises her findings to tell us more about Lorca’s work.

When Lorca first published Bodas de Sangre (Blood Wedding), Yerma and La Casa de Bernarda Alba (The House of Bernarda Alba) as a complete set, he called them Duende: Obras Completas. Whilst “Obras Completas” quite simply means “Complete Plays”, “duende” has a myriad of possible translations. Its literal translation is “goblin” or “elf”; however, in this case Lorca seems to be referring to the “soul” which some of his characters have and, quite notably, others don’t. The “soul” that Lorca was most interested in exploring was certainly female, as one can see in these plays.

The Canadian poet and critic Janis Rapoport argues that these plays should be seen as a complete set, with Bodas de Sangre, Yerma and La Casa de Bernarda Alba forming a thesis, antithesis and synthesis respectively. She sees the women in Bodas de Sangre as being like mirrors, due to their ability to make the audience reflect on social conventions. Yerma, to her, is a prism – a self-contained entity that refracts and distorts the qualities of light and image with both internal and external barriers. In La Casa de Bernarda Alba she sees the women as collectively forming a kaleidoscope, as they reflect and refract off each other. She goes so far as to say that the house in the play represents the soul of one individual woman.

In Bodas de Sangre women are bound by their social functions. The characters are not endowed with names, and thus they lose a sense of their identity. The principal women are the Bride, the Mother and the Beggar Woman. Perhaps the most interesting woman to analyse is the Bride, who is continually bound by her circumstances. We see women oppressing women in the form of her servant attempting to instil morality into her. For the Bride this acts as an imprisoning ideology which hinders her in her pursuit of sexual fulfilment. However, this pursuit results in tragedy because of the societal expectations of virginity before marriage that are placed upon the Bride. The Mother, Janis Rapoport notes, is an affected character rather than an affecting one. She is greatly affected by the grief that she feels for her husband and son (and eventually sons). She is continually let down by men, and her entire identity is defined by this. The Beggar Woman symbolises one of the play’s more profound themes – the mysteries of life and death – conveying that she is somewhat liberated by old age. However, Lorca highlights how all women are bound throughout the generations, in different ways: a young woman’s predicament is centred around her sexuality, whereas an older woman’s is centred around the lives of her sons. Lorca uses water imagery to portray a contrast between a free and a controlled woman. The control and oppression of women is very much the central theme of the play.

Yerma’s themes are, perhaps, a little more nuanced. There is again the representation of women of all generations, the eldest being the Pagan Crone, who has long been repressed by the requirements of honour and the strict morality placed upon her. The middle-aged Dolores represents a dichotomy of faith and the supernatural: she prays frequently, yet she practises magic in her fertility rituals with Yerma. Then there is Yerma herself. “Yerma” quite literally means “barren” – ostensibly referring to her inability to produce a child with her husband Juan. However, this barrenness is also symptomatic of the psychological and emotional (as well as physical) emptiness of her womanhood. One may see Yerma’s quest for a child as a yearning for confirmation of feminine identity. However, like the Mother in Bodas de Sangre, she is, bizarrely, indirectly responsible for the death of her own son: by strangling her husband Juan at the end, she essentially ruins all chance of having a child. In both Bodas de Sangre and Yerma, women’s passionate sexuality, in the case of the Bride, or erotic deficiency, in the case of Yerma, leads to tragedy. Thus Lorca highlights the lack of agency over their sexuality that women had in rural Spain.

Rapoport puts forward the idea that the house in La Casa de Bernarda Alba, with its “thick walls”, embodies the soul of a single woman. Each of the sisters becomes a fragment of a woman’s soul. Adela is the most significant of the sisters, perhaps because of her naïveté: she longs for freedom but does not appreciate that it may result in more oppression under the sexual authority of Pepe el Romano, her lover. Bernarda, despite her tyrannical behaviour, is as much a victim of the patriarchy as her daughters, if not more so, as she has absorbed such oppressive values into her own psyche. The different views and lives of the women reflect off each other throughout the play.

Fundamentally, Lorca, remarkably for a male writer of his time, strikingly presents life for women in rural Spain and the psychological and philosophical impact of oppression – perhaps because he himself was a homosexual who would later be killed by Franco’s Nationalist forces.

Twitter: @English_WHS, @SpanTweetsWHS

Hearing in colour – Synesthesia and musical composition

What if we heard music and at the same time could see colours? What if we composed music to create colours? Louisa (Year 12) investigates synesthesia and musical composition.

Synesthesia is the neurological condition where the stimulation of one sensory or cognitive pathway leads to automatic, involuntary experiences in another. There are many different types; common examples include grapheme-colour synesthesia, where letters and numbers are seen as clearly coloured, and chromesthesia, where different musical keys, notes and timbres elicit specific colours and textures in one’s mind’s eye. For example, some synesthetes may clearly see the musical note F as blue, see Wednesday as dark green, or experience the number 6 as tasting of strawberries.

How some synesthetes may experience letters and numbers

Whilst some synesthetic associations are more common than others, it is possible for them to occur between any number of senses or cognitive pathways.

The definitive cause of synesthesia is not yet known; however, most neuroscientists agree it is caused by excess interconnectivity between the visual cortex of the brain and the different sensory regions. It is estimated that around 1 in 2,000 people experience true synesthesia, and it is more common in women than men; however, it may be more common than this, as many who have it may not consider it a condition and so leave it unreported.

One area in which there is a large concentration of synesthetes is the arts; notable synesthetes include the composers Olivier Messiaen, Franz Liszt and Jean Sibelius, the Russian author Vladimir Nabokov, the artists Vincent van Gogh and David Hockney, jazz legend Duke Ellington and actress Marilyn Monroe.

Composers who experienced chromesthesia (the type of synesthesia where musical keys and notes and sometimes intervals are associated with colours) often actively incorporated it into their works and in some cases made it central to their compositions.

How musical keys may be seen by people with chromesthesia

French composer Olivier Messiaen (1908-1992) was quoted as saying “I see colours when I hear sounds but I don’t see colours with my eyes. I see colours intellectually, in my head.” He said that if a particular sound complex was repeated an octave higher, the colour he saw persisted, but grew paler. If the octave was lowered the colour darkened. Only if the sound complex was transposed into a different pitch did the colour inside his head radically change.

For Messiaen, it was vital that performers and listeners of his music understood the colours he was portraying in his compositions, and he achieved this by writing instructions in his scores. For example, pianists in the second movement (Vocalise) of his Quartet for the End of Time, written in a prisoner-of-war camp in 1940, are told to aim for “blue-orange” chords. Similarly, musicians playing ‘Couleurs de la cité céleste’ are instructed to conjure “yellow topaz” for one chord cluster and “bright green” for the next, among many other examples.

Another composer who actively made use of his synesthesia was the Finnish composer Jean Sibelius (1865-1957). Sibelius wrote that “music is for me like a beautiful mosaic which God has put together”. He said that if he heard a violin playing a certain piece of music, he would see a corresponding colour, such as the colour of the sky at sunset in summer. The colour would be uniquely specific and would only be triggered by that particular sound. This means many of his compositions have strong links to the imagery Sibelius experienced, which may account for the strong emotional pulse that can be heard throughout his work.

Similarly, Franz Liszt (1811-1886) was known to draw on his synesthesia when working with orchestras, saying “O please, gentlemen, a little bluer, if you please! This tone type requires it!” or “That is a deep violet, please, depend on it! Not so rose!” Initially the orchestra believed Liszt was joking, before realising that he did in fact see colours for each tone and key.

It can be difficult to understand the experiences of true synesthetes if one does not have the condition oneself; however, this can be made easier by looking at the works of the synesthetic artist Wassily Kandinsky (1866-1944), often considered the first abstract painter. Instead of using his synesthesia to compose new music, he created artwork based on the music he heard.

Kandinsky discovered his synesthesia at a performance of Wagner’s opera Lohengrin in Moscow. He said: “I saw all my colours in spirit, before my eyes. Wild, almost crazy lines were sketched in front of me.” In 1911, after studying and settling in Germany, he was similarly moved by a concert of Schoenberg’s Drei Klavierstücke, Op. 11, and finished painting Impression III (Konzert) two days later.

Impression III (Konzert) – Kandinsky

When studying the music of known synesthetic composers, it’s important we bear in mind what the composers were experiencing when writing it as it adds another dimension to the music and can change the overall interpretation. It also offers a fascinating link between music and art, adding increased complexity to the process of musical composition.

Twitter: @Music_WHS

“Why are German Kindergartens so successful?”


Sofia Justham Bello, Year 12, tells us more about a recent work experience trip to a Kindergarten in Germany, focusing on how its educational practice differs from her own education.

This blog is based on my work experience in a German Kindergarten in Schwäbisch Hall, Southern Germany, which was arranged by the Goethe Institut. The Goethe Institut promotes the study of German abroad and encourages international cultural exchange, through language lessons, lectures, courses and libraries.

I entered a logo-designing competition in September for the Friends of the Goethe Institut London and won, along with 9 other 16-year-olds in the UK, a work-shadowing trip to Germany. I worked at a Kindergarten which had children aged from three to six (it involved a lot of singing, going for walks in the forest, and even carpentry!). I really liked how the children there had the freedom to play, and the multicultural aspect of the Kindergarten was uplifting, given events happening in the world today. I may also want to work in education, so it was a useful experience.

The system of the German Kindergarten is important for understanding why my work experience there was so inspiring. It is common knowledge that the word “Kindergarten” literally means “children’s garden” in German. Kindergartens were established as a pre-school educational approach based on social interaction through singing, playing and more practical activities such as painting and arts and crafts.

Arts and crafts (“Basteln”) are very important to German children and integral to German culture; when I worked at the Kindergarten, the children were preparing “Laternen” (lanterns) for St. Martin’s Day, the traditional children’s festival on the 11th of November, when Kindergarten children walk the streets holding the lanterns that they have made.

These creative teaching methods ensure that children interact with others and thus transition successfully from home to school.

Historically, such “institutions” for young children originated in Bavaria in Germany and arose in the late 18th century in order to help families support their children whilst both parents worked. Nonetheless, they were not called “Kindergartens” at this point. In fact, the term was later coined by Friedrich Fröbel, who created a “play and activity” institute in 1837. He renamed his institute “Kindergarten” in 1840, reflecting his belief that children should be nurtured and nourished “like plants in a garden”.

This idea of children flourishing “like plants in a garden”, and the independence connoted by this image, was evident in the Kindergarten where I worked in Schwäbisch Hall. On arrival I noticed immediate differences between my own nursery experience and the “Kindergarten” experience the children were receiving in Germany. The teachers working there were shocked that I had begun school at the age of four, whereas in Germany, Kindergarten is a process that runs from age three all the way up to six. The site also had a “Kinderkrippe” (a crèche) upstairs, so essentially up to six years of your life could take place there – a huge part of your childhood. Hence the responsibility the teachers have in shaping that childhood is huge.

The teaching approach there encourages the young children to think and act independently. Moreover, there is a huge focus on nature, and every day the children would go on a walk and connect with it. The first day was “Waldtag” (forest day), part of a national scheme run by the government to encourage children to explore German forests. We spent a long day walking, running and feeding animals such as goats and sheep. In the afternoon we stopped for a break and the children were able to play. One child approached me repeatedly saying the word “Säge”, which means “saw”; I thought that this had got lost in translation, but to my surprise the children began sawing on the forest floor, constructing small houses out of branches, with minimal supervision from the teachers.

It was evident, from just one such example, that the children there have more freedom to play and no pressure to read or write (which comes naturally later on), and thus their childhood is extended and their collaborative skills are improved. The older children took care of the younger ones, and overall it was an extremely inspiring experience.

Here is a link to the Goethe Institut Website: https://www.goethe.de/en/index.html

@German_WHS

Crispr – How new gene editing technology will affect you.

Emma Ferraris in Year 12 gives us an insight into the new gene editing technology of the past 30 years and the potential it has for changing the world of medicine and how we view our species.

In the past 30 years, editing of the human genome has evolved and improved in leaps and bounds, most notably with the discovery of the new technology Crispr, which since 2012 has been available to, and used by, scientists to manipulate the genome.

Crispr itself is a short section of repeated DNA found in the genomes of bacteria and other microorganisms, but when coupled with an enzyme such as Cas-9, the technology enables geneticists to edit part of the human genome by cutting it at a specific place and removing or adding new strings of DNA. This relies on a guide RNA, which acts as a marker to ensure the Cas-9 enzyme edits the sequence in the right place.
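To picture how the guide “marker” works, here is a toy sketch in Python (my own illustration, not a real bioinformatics tool: the DNA and guide sequences are invented, real guides are around 20 bases long, and Cas-9 also requires a neighbouring PAM motif, which is ignored here).

```python
# Toy illustration of Crispr targeting: the guide sequence tells us where to cut.
# All sequences are made up; this ignores PAM sites, strand direction and
# everything else a real genome-editing tool would have to handle.

def find_cut_site(dna, guide):
    """Return the position in `dna` where the guide matches, or -1 if absent."""
    return dna.find(guide)

dna = "ATGGCGTACGTTAGCCTAGGATCCGTAAGC"
guide = "TAGCCTAGG"  # the marker sequence the RNA would carry

site = find_cut_site(dna, guide)
if site >= 0:
    # In a real edit, Cas-9 would cut the DNA here, and the cell's repair
    # machinery could then remove bases or splice new ones in at the break.
    print("match at position", site)
    print(dna[:site] + "---cut---" + dna[site + len(guide):])
else:
    print("no match: the guide does not recognise this DNA")
```

If the guide does not match anywhere, nothing is cut – which is exactly why the RNA “marker” makes the editing so precise.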


As a result, this technology is the most precise and versatile method of manipulating genes to date, and moreover can be purchased for around $60 – far cheaper than any other known method of DNA splicing. Unsurprisingly, then, there has been much buzz – and some scruples – around the new technology in the scientific world.

So, what will this gene editing mean for the future of medicine? And how will this affect you?

There is a whole host of possible uses for this new technology, including its capacity for combatting diseases, viruses and mutations in humans, as well as its ability to edit the genome of specific cells in the body.

On a small scale, Crispr technology has been used successfully to edit the HIV virus out of the cells of rats. In 2016, Kamel Khalili, director of the Comprehensive NeuroAIDS Center at Temple University, managed to edit out around 50% of the HIV virus that was present in 99% of the rats’ cells. This success rate seems very hopeful for the future removal of this virus from both animals and humans; indeed, Khalili himself commented that “CRISPR may be more convenient for gene editing than the prior gene editing tools used.” However, this is only one step, albeit an important one, in the process of using Crispr to edit the virus out of a human patient’s cells.

On a larger scale, Crispr has in the past year been used in immunotherapy to treat certain cancers. Michel Sadelain, of Memorial Sloan Kettering Cancer Center, was able to remove T-cells (a type of white blood cell which plays a part in the immune system) from the blood and edit them using Crispr so that they were better able to recognise the antigens (individualised markers) on cancerous cells. As a result, the T-cells could locate and destroy the tumour much more easily and to greater effect.

Similarly, in Pennsylvania this month, scientists began an experiment not only to make the T-cells better able to locate the mutating cells, but also to edit out two of the genes in the immune cells so that they are better able to actually attack the tumour. This ex vivo (outside the body) therapy is also a lot safer than injecting Crispr straight into the blood, which can sometimes cause a negative immune reaction.

Perhaps most like a science-fiction plot, Crispr could also in future be used to edit human genes, and indeed the DNA in reproductive cells, so that a new breed of eugenics could be on the horizon.

James J. Lee, a researcher at the University of Minnesota, said: “In my opinion, CRISPR could in principle be used to boost the expected intelligence of an embryo by a considerable amount.” This prospect is exciting but has also, unsurprisingly, sparked many bioethical debates in recent years, especially after a 2015 study in China used Crispr to edit a human embryo.

Whether or not you agree with the ethics, the prospects of designer babies and the perfect genetically engineered human soldier, which were once merely fictional, suddenly seem like a possible reality if the editing of human embryos continues to improve.

Without a doubt there are many benefits of this technology which in the next decades will become increasingly used by the biomedical field in treatments for diseases and viruses in humans and animals. The potentially unethical and dehumanising effects of DNA editing are much more obscure, so it is the future generation’s responsibility to ensure the use of Crispr remains purely in the interest of scientific improvement.

Follow the WHS Biology department on Twitter: @Biology_WHS