Does honesty have a place in the law?

Lilly (Y11) discusses the importance of honesty in law and whether the practice of law and the justice system needs honesty to function.

When considering this question, it is tempting to think immediately of the skilful yet exploitative way in which lawyers can bend any law or lunge into any loophole that will win their case, and to conclude that honesty most definitely does not fit into law. This narrative has been heard in stories stretching back centuries; three centuries ago, the 18th-century poet John Gay described it as follows:

I know you lawyers can with ease,

Twist words and meanings as you please;

That language, by your skill made pliant,

Will bend to favour every client;

That ’tis the fee directs the sense,

To make out either side’s pretence.

Now, it is incorrect to say that there is no truth in this. If we think about why lawyers are hired in the first place, it is obviously to find ways in which the law can be used to favour their client. It is doubtful that many people would hire a lawyer and tell them to find whatever they believed to be the most honest course of action, rather than whatever will let them walk free. In other words, a lawyer's job in essence is to manipulate the legal terms and conditions in order to present a client who seems innocent in front of a jury, but who perhaps looks more like a thief once the doors of the court close behind them.

The law is a set of codes, and the program that the codes feed in to allows for our society to be regulated and run smoothly.

To really understand this question, we have to strip the very concept of law back to its basics. The Oxford Dictionary defines law as "the system of rules which a particular country or community recognises as regulating the actions of its members and which it may enforce by the imposition of penalties." In essence, the law is a set of codes, and the program that the codes feed into allows our society to be regulated and run smoothly. This view of society as fairly binary (right or wrong) leaves no scope for honesty; it could therefore be said that the law is much like a computer program, with lawyers as the coders operating the whole show.

However, at the risk of sounding like we live in some sort of dystopian novel controlled by an IT department of lawyers, let's look at why this idea is flawed. This stripped-down view of law is very different from our unfortunately much less simple reality, in which honesty must have a purpose. If you cut right to the heart of law, you see that honesty is in fact an integral part of its composition. It is no mistake that one swears to tell "the truth, the whole truth, and nothing but the truth" at a trial. This reflects not just the presence, but the necessity, of truth in law.

Yet even so, a necessity for honesty doesn't necessarily translate into a guarantee of honesty. Every single human is dishonest at some point (if not many points) during their lives. Mostly, this dishonesty manifests itself in the form of white lies, which are, on the whole, harmless. It is only in a smaller portion of society that it presents itself as serious crime. But this is the important part: the very system of law only works on the assumption that human beings are honest most of the time. If we were the opposite, i.e. dishonest most of the time, the system of law as we know it would not be able to function, and instead we would probably be living under a system where the means of control were strictly military. Conversely, if human beings were intrinsically honest, we wouldn't need law at all, and would probably live in something closer to the computer analogy mentioned earlier. The assumption that we are honest most of the time means that the law functions by proving someone to be dishonest, reflected in the well-known phrase "innocent until proven guilty". Can you imagine if humans were recognised by the law to be dishonest most of the time? This would not only blur the lines as to whether a given law was good or bad, but also make it incredibly hard to distinguish between a law created to prosecute and one created to persecute (which would probably result in martial law).

If human beings were intrinsically honest, this would mean that we wouldn’t need law at all.

In short, there isn't a straightforward answer to this question. Many have argued against honesty's place in law on the grounds of the self-interested, survivalist nature of humans: even if honesty did have a place in law, it is easily undermined by our drive for self-preservation, which usually doesn't coincide with being completely honest. But even so, it is undebatable that the degree to which we regard honesty as significant in law has a huge influence on our society.

Castles: architecture and story.


Daisy (Y12) explores the significance of the castles that are dotted all over the British Isles, arguing that we should look beyond their architectural genius to study the stories behind them.

Castles are undoubtedly the most important architectural legacy of the Middle Ages. In terms of scale and sheer number, they dwarf every other form of ancient monument and dominate the British landscape. What's more, the public has an enduring love affair with these great buildings, since they play an intrinsic role in our heritage and culture: over 50 million people pay a visit to a British castle each year.

Arundel Castle, West Sussex

The period between the Normans landing at Pevensey in 1066 and that famous day in 1485 when Richard III lost his horse and his head at Bosworth (consequently ushering the Tudors and the Early Modern period into England) marks a rare flowering of British construction. Whilst the idea of "fitness for purpose" was an important aspect of medieval architecture, the great castles of the era demonstrate that buildings had both practical and symbolic uses.

One of the most iconic forms of medieval castle was the Motte and Bailey, which came hand in hand with the Norman conquest of 1066. The Bayeux Tapestry eloquently depicts the Norman adventurers landing at Pevensey and hurrying to occupy nearby Hastings. The men paused on the Sussex coast and are said to have indulged in an elaborate meal at which they discussed their plans for occupying the highly contested country. The caption of the Tapestry notes "This man orders a castle to be dug at Hastings", followed by a rich scene illustrating a group of men clutching picks and shovels heading off to start their mission – castles were the Normans' first port of call. In the years that followed, castle-building was an intense and penetrating campaign, with one Anglo-Saxon chronicler stating that the Normans "built castles far and wide throughout the land, oppressing the unhappy people, and things went ever from bad to worse". It was vital for William to establish royal authority over his newly conquered lands, and thus the country saw the mass erection of potentially more than 1,000 Motte and Bailey castles. With these buildings, speed was of the essence, and so the base, or Bailey, was made of wood, while the Keep (which sat on top of the Motte) was relatively small and made of stone.

A drawing displaying a Motte and Bailey Castle.

Despite the fact that these castles obviously served a defensive purpose, with the elevation of the Motte providing the Norman noble with a panoramic view of the surrounding countryside, their symbolic purpose was arguably of greater significance. The erection of such a large number of castles altered the geopolitical landscape of the country forever and made sure that the Norman presence would be felt and respected. Figuratively, the raised Keep acted as a physical manifestation of the Normans' dominance over the Anglo-Saxons, the upward gradient effortlessly representing the authority and superiority of the Lord – the fundamental aspect of the feudal system. Additionally, as many academics emphasise, the fact that castles were often located to command road and river routes for defensive purposes meant that their possessors were also well placed to control trade, and thus could both exploit and protect mercantile traffic.

Another key development in castle building occurred roughly a century after the Battle of Hastings, during the reign of Henry II. Prior to Henry's accession, England had been burdened by civil war and a period known as "the Anarchy" under his predecessor, King Stephen. Taking advantage of the confusion and lawlessness of Stephen's reign, the barons had become fiercely independent. They had not only issued their own coinage but also built a significant number of adulterine castles and unlawfully appropriated a large portion of the royal demesne. As a result, in order to re-establish royal authority, Henry set about demolishing these illegal castles en masse, expending some £21,500 and further highlighting his supremacy. Thus, as seen under the Normans, castles were again used as a means of establishing royal authority. From here the British landscape was significantly altered for a second time, which, in a period lacking efficient communication and technologies, would have been a highly visible emblematic change impacting all members of society, from the richest of the gentry to the everyday medieval peasant.

As a result, whilst it is important to appreciate the architectural styles and physical construction of medieval castles, I believe it is vital to acknowledge their symbolic nature, and to appreciate how, through the introduction of such fortresses, people's lives would change forever.

Follow @History_WHS on Twitter.

Should standardised exams be exchanged for another form of assessment?


Jasmine (Year 11) explores the merits and weaknesses of exams as the formal assessment of intelligence, discussing whether an alternative should be introduced that suits all students.

Exams – the bane of existence for some but an excellent opportunity to excel for others. Thought to have originated in China with the standardised "imperial exam" of 605 AD, they are the education system's way of assessing the mental ability and knowledge of students whilst also creating a practical method of comparison with others in the country. They are therefore an important factor and indicator for employers. But does this strict, rigid method really work for assessing intelligence, or is it just a memory game that only a select few can win?

In a survey, I asked 80 students whether they think exams should be exchanged for another form of assessment, and 78% agreed that they should. However, when asked about their reasoning, it was mostly a stereotypical dislike of the stressful period. Some who agreed also mentioned the unrealistic exam conditions that would not occur in daily life. One example given was that during a language oral exam, a great amount of pressure is put on students, causing them to become nervous and not perform to their best ability. In a real-life conversational situation, however, they would not have to recite pre-prepared answers; the pressure would be off and the conversation would flow more naturally. This shows that although someone may have real fluency in and talent for a language, their expertise will not be recognised and rewarded accordingly.

Among many students, examinations are accused of being memory tests that only suit a certain learning style, and the slow abolition of coursework at GCSE level is contributing to this. Consider the many people in the country with learning difficulties such as dyslexia. These students may be particularly bright and diligent workers; however, their brains do not function in the way exams rely on them to. Put in front of a practical task that they have learned to do through experience, though, they prove far more knowledgeable and perceptive. Studies show that learning something consistently over a long period keeps it in our memory. Yet although it is important to ingrain essential facts, especially at GCSE level, GCSEs mostly consist of learning facts over around two to three years followed by a single final exam, which does not particularly reward consistent learning and is more just an overflow of information.

Stress levels caused by the lead-up, the exams themselves, and the waiting period for results that follows are also a major factor in the argument that traditional standardised tests should be replaced. According to the NSPCC, from 2015-2016 there was a 21% increase in counselling sessions for 15-18 year olds affected by exam stress, many of whom would have been doing GCSEs and A Levels. Some say that the stress these tests cause is necessary for success and mimics the stresses of the real world; but how essential are exams like non-calculator Maths papers when nowadays most people have calculators on their phones? Exams are also said to create healthy competition that prepares people for the struggles and competitive nature of the modern working world and motivates students, but couldn't this be done with another form of assessment more suited to the individual student?

However, the use of different approaches to examination may, in fact, risk corrupting the test. Grading would become largely subjective, and there would be more scope for unfair advantage for some over others. The restrictive nature of today's exams, with a set time, set paper and set rules, does make fairness a priority – but is the exam itself really the most equal way to test so many different students?

Standardised exams are not the best way of determining the knowledge and intelligence of students around the world, because of the stress and pressure they cause, the fact that they suit only certain learning styles, and their poor resemblance to real-life situations in the working world. Changing the form of these assessments may, however, cause grades to become unreliable. My suggestion would be smaller and more practical assessments throughout the course, all contributing to the final grade, as this puts less pressure on students and helps those who rely on different learning strategies to excel and demonstrate their full potential.

Hotspotting: the conservation strategy to save our wildlife?


Alex (Year 11) investigates whether the strategy of hotspot conservation is beneficial to reducing mass extinction rates, or if this strategy is not all it claims to be.

Back in 2007, Professor Norman Myers was named the Time Magazine Hero of the Environment for his work in conservation with relation to biodiversity hotspots. He first came up with his concept of hotspot conservation in 1988, when he expressed his fears that ‘the number of species threatened with extinction far outstrips available conservation resources’. The main idea was that he would identify hotspots for biodiversity around the world, concentrating conservation efforts there and saving the most species possible in this way.

Myers' fears are even more relevant now than 30 years ago. According to scientific estimates, dozens of species are becoming extinct daily, leading to the worst epidemic of extinction since the death of the dinosaurs 65 million years ago. And this is not as naturally occurring as a giant meteor colliding with the Earth – 99% of the species on the IUCN Red List of Threatened Species are at risk from human activities such as ocean pollution and loss of habitat through deforestation, amongst other things. It is therefore crucial that we act now to adopt a range of conservation strategies to give our ecosystems a chance at survival for future generations.

To become accepted as a hotspot, a region must meet two criteria: firstly, it must contain a minimum of 1,500 endemic (native or restricted to a certain area) plant species; secondly, it must have lost at least 70% of its original vegetation. Following these rules, 35 areas around the world – ranging from the Tropical Andes in South America to more than 7,100 islands in the Philippines, and all of New Zealand and Madagascar – were identified as hotspots. These areas cover only 2.3% of Earth's total land surface but contain more than 50% of the world's endemic plant species and 43% of endemic terrestrial bird, mammal, reptile and amphibian species, making them crucial to the world's biodiversity.
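
Because the definition is a simple two-part test, it can be written down almost directly. Here is a minimal sketch in Python; the field names and example numbers are purely illustrative, not any conservation agency's actual data:

```python
# A minimal sketch of Myers' two-part hotspot test.
# Field names and example values are invented for illustration.

def is_hotspot(endemic_plant_species: int, vegetation_lost_pct: float) -> bool:
    """A region qualifies as a biodiversity hotspot if it holds at least
    1,500 endemic plant species AND has lost at least 70% of its
    original vegetation."""
    return endemic_plant_species >= 1500 and vegetation_lost_pct >= 70.0

print(is_hotspot(endemic_plant_species=3000, vegetation_lost_pct=75.0))  # True
print(is_hotspot(endemic_plant_species=3000, vegetation_lost_pct=40.0))  # False: too little vegetation lost
print(is_hotspot(endemic_plant_species=900, vegetation_lost_pct=90.0))   # False: too few endemic plants
```

Note that both conditions must hold at once, which is exactly why rich but largely intact ecosystems fail the test – a point the criticisms below return to.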

This concept has been hailed as a work of genius by conservationists and has consequently been adopted by many conservation agencies such as Conservation International – who believe that success in conserving these areas and their endemic species will have ‘an enormous impact in securing our global biodiversity’.

The principal barrier to all conservation efforts is funding, as buying territories and caring for them costs a great deal of money, raised primarily from businesses, governments and individual donors. Most of this funding comes through campaigns focused on charismatic megafauna such as the penguin or the snow leopard. These campaigns motivate people because they feel a closer connection to the animals and seem really to be making a difference in conserving these species. When conservation is done at a larger, regional level, donors feel less of the gratification that comes with giving, as they have less control over the conservation work being done. By identifying 35 specific areas towards which to direct funds, hotspot conservation reconnects the public, as well as larger companies and local governmental bodies, to the projects, thereby encouraging more donations. It is for this reason that hotspot conservation has received £740 million, the largest amount ever assigned to a single conservation strategy.

Although the 35 identified areas are relatively widespread and well funded, the strategy has been criticised for neglecting other crucial ecosystems. First of all, there are no hotspots in northern Europe and many other areas of the world, neglecting many species of both flora and fauna. Also, as the criteria for classification as a hotspot are defined with reference to endemic plant species, many species of fauna are overlooked, from insects to large and endangered species such as elephants, rhinos, bears, and wolves. Furthermore, areas referred to as 'coldspots' are ignored. This could lead to the collapse of entire ecosystems following the extinction of key species.

Another major issue with this strategy is that terrestrial environments make up only around 29.2% of the earth's surface area. The other 70.8% is covered in very diverse (but also very threatened) oceans and seas. Marine environments are overlooked by hotspot conservationists because they rarely have 1,500 endemic plant species: deep oceans with very little light are not an ideal environment for plant growth, and species floating near the surface are rarely confined to one specific area, meaning they are not endemic.

So, if even the more successful strategies for conservation are so flawed, is there any hope for the future? I think that yes, there is. Although there is no way to save all the species on earth, identifying crucially important areas to concentrate our efforts on is essential to modern conservation efforts. Hotspot conservation is definitely improving the ecological situation in these 35 areas and so those efforts should be continued, but that doesn’t mean that all conservation efforts should be focussed only on these hotspots. Hotspot conservation should be part of the overall strategy for reduction of mass extinction rates, but it is not the fix-all solution that some claim it is.

Follow @Geography_WHS & @EnviroRep_WHS on Twitter.

Can we hope for junk-free Space?

Leslie in Year 11 discusses the increasing threat of junk in Earth's orbit, the significance of and urgency in removing it, and whether a new experiment, led by the Surrey Space Centre, will provide a potential solution to the crowded orbit.

Since the mid-20th century, the rising interest in outer space has resulted in a vast amount of space debris. This under-reported phenomenon, also known as space junk or space waste, is the cluttering of Earth's orbit with man-made objects, and it has potentially dangerous consequences. But why should it capture people's attention globally?

Hundreds of thousands of unused satellites and fragments of spacecraft (including rocket stages and paint flakes) from all over the world share the same orbits as functioning spacecraft. This is because many pieces of unwanted space debris take a long time, even decades, to deorbit and fall back to Earth. Clearly, with rising global interest in space exploration, the chances of collision are growing ever greater.

A report from the U.S. National Research Council in 2011 warned NASA that the ‘amount of orbiting space debris was at a critical level…enough currently in orbit to continually collide and create even more debris, raising the risk of spacecraft failures’. More than half a decade has passed since, and the removal of space debris definitely seems urgent.

A key solution to this issue is the removal of space waste from orbit; this is important as even tiny particles of less than 1 cm can have dramatic effects due to the high speed at which they travel and the risk of collisions. Perhaps surprisingly, these particles are a major threat to spacewalking astronauts and humans aboard spacecraft. Whilst it is important to acknowledge that collisions are unlikely because space is unimaginably huge, the possible consequences could be dramatic, making it essential to diminish the growing threat posed by space debris.
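
To get a feel for why such small particles matter, here is a rough back-of-the-envelope calculation using the standard kinetic energy formula; the mass and closing speed below are assumed, illustrative values, not measurements:

```python
# Kinetic energy of a tiny debris fragment: KE = 0.5 * m * v**2
# All values are assumptions chosen for illustration only.

m = 0.001      # kg: a roughly 1 g fragment (paint-flake scale)
v = 10_000.0   # m/s: a plausible closing speed in low Earth orbit

ke = 0.5 * m * v**2
print(f"Fragment energy: {ke:,.0f} J")  # 50,000 J

# For comparison, a 1,000 kg car at 10 m/s (36 km/h) carries the same 50 kJ:
print(f"Car energy:      {0.5 * 1000 * 10**2:,.0f} J")
```

A one-gram speck delivering the energy of a moving car is why even "harmless" debris endangers spacecraft.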

To demonstrate this point, less than two years ago the satellite Sentinel-1A suffered an impact, in which an object slammed into one of its solar panels and caused a dent nearly half a metre across. Had the main spacecraft been hit, it would have resulted in serious damage. Holger Krag, Head of ESA's Space Debris Office at ESOC (European Space Operations Centre), stated, 'We appear to have survived this unexpected collision with minimal impact on this particular satellite. We may not be so fortuitous next time.'

Announcements from the leading astrophysics agencies have emphasised the critical quantities of space debris, and although space travel has always carried risks, the rising amount of space junk puts existing spacecraft under continuous threat, especially as millions of small particles are untraceable. Encouraging further experiments focused on debris removal is necessary; the urgency of finding a solution is putting many space agencies under pressure.

The solution may be closer to home than we think! Not too far from Wimbledon, the ongoing RemoveDebris mission at the Surrey Space Centre aims to capture and destroy space debris through low-cost methods, which will hopefully reduce the risk of future collisions. The experiment, planned for launch this year, consists of four ways to tackle space debris: a net experiment, a VBN (vision-based navigation) experiment, a harpoon and deployable target experiment, and a DragSail. If these methods turn out to be successful, it will be a step towards a safer orbit for the future. The RemoveDebris craft will carry its own junk and measure the success of its methods in space.

The initial experiment involves capturing debris by firing a net. When the CubeSat (released by RemoveDebris as a target to capture) is at a distance of 7 m, the net fires and hits it. The net's large surface area then causes the CubeSat to deorbit at an accelerated rate, which will hopefully remove the debris from space.
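
The physics behind this (and behind the DragSail described below) is that atmospheric drag scales with cross-sectional area, so a deployed net or sail dramatically shortens orbital lifetime. A minimal sketch using a simple flat-plate drag model, with assumed, illustrative numbers rather than mission figures:

```python
# Drag deceleration in orbit: a = 0.5 * rho * v**2 * Cd * A / m
# All values below are rough assumptions for illustration only.

RHO = 1e-13    # kg/m^3: approximate air density a few hundred km up
V = 7_600.0    # m/s: typical low-Earth-orbit speed
CD = 2.2       # drag coefficient, a common flat-plate assumption

def drag_deceleration(area_m2: float, mass_kg: float) -> float:
    """Deceleration (m/s^2) due to atmospheric drag."""
    return 0.5 * RHO * V**2 * CD * area_m2 / mass_kg

bare = drag_deceleration(area_m2=0.1, mass_kg=4.0)   # bare CubeSat (assumed)
net  = drag_deceleration(area_m2=10.0, mass_kg=4.0)  # net/sail deployed (assumed)

print(f"Bare CubeSat:  {bare:.2e} m/s^2")
print(f"With net/sail: {net:.2e} m/s^2 (~{net / bare:.0f}x faster orbital decay)")
```

With these assumed figures, a hundredfold increase in area gives roughly a hundredfold increase in drag, turning a decades-long decay into something far quicker.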

Airbus, an international aerospace company, is involved in the harpoon and target experiment, and many scientists believe this could in fact provide the solution to space junk. In the RemoveDebris experiment, a miniature harpoon is planned to be on board. A DragSail, also on board, is designed to quicken the de-orbit of the satellite when deployed and to speed up the rate at which it burns up in the Earth's atmosphere, as explained by the Surrey Space Centre.

Success in removing space debris will lessen the risk of collision, creating a safer environment for functioning satellites and other space vehicles, especially those with humans aboard. This is a necessary precaution before taking further steps in space exploration, and the success of this experiment would provide a new, innovative way to increase safety in outer space.

Although this experiment provides hope for a better solution to the problem of space debris, how long it will take to make the orbit safe again remains an open question. Nevertheless, the many experiments being undertaken to tackle this pressing problem provide some consolation. Although we seem extremely far away from junk-free space, it might not be an impossibility.

Follow @Physics_at_WHS on Twitter.

Artificial Intelligence and the future of work

By Isabelle Zeidler, Year 7.

What is AI, and how will it change our future?

Firstly, for AI to work there are three key requirements: data, hardware and algorithms. An example of data is the words of a dictionary saved on a computer; without this data, Google Translate won't work. Hardware is necessary so that the computer is able to store and process data. Lastly, algorithms are what many of us know as programming: the instructions that let us do something with our data.
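
A toy sketch may make the three parts concrete. This is purely an illustration, not how Google Translate actually works: the word list is the data, the lookup function is the algorithm, and the hardware is whatever stores and runs it.

```python
# Data: a tiny English-to-German word list (invented for the example).
translations = {"dog": "Hund", "cat": "Katze", "house": "Haus"}

# Algorithm: look each requested word up in the stored data.
def translate(word: str) -> str:
    return translations.get(word, f"<no entry for '{word}'>")

print(translate("dog"))   # Hund
print(translate("tree"))  # <no entry for 'tree'> -- no data, so no translation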

The history of AI is longer than we might imagine; we have used AI since 1950. Machine Learning (ML) is a kind of AI, in use since 1980. The most modern kind of ML is Deep Learning (DL). Many of us do not know about this, but a lot of us know the companies that use it; among the most advanced in DL are Google and IBM (with Watson). So why is DL so amazing? In traditional programming and much of ML, the rules are coded by hand by programmers; DL learns these rules by observation. This is similar to what happens when babies learn to speak – they rely on observing others.
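
The difference can be shown with a deliberately tiny example (invented for illustration; real ML uses far richer models). In the first function a programmer hand-codes the rule; in the second, the rule is learned from labelled examples:

```python
# Task: decide whether a number counts as "large".

# 1) Hand-coded: a programmer picks the threshold themselves.
def is_large_hand_coded(x: float) -> bool:
    return x > 50  # rule written by a human

# 2) Learned: the threshold is inferred from labelled examples.
examples = [(10, False), (30, False), (60, True), (90, True)]

def learn_threshold(data):
    """Put the threshold midway between the largest 'small' example
    and the smallest 'large' example."""
    biggest_small = max(x for x, is_large in data if not is_large)
    smallest_large = min(x for x, is_large in data if is_large)
    return (biggest_small + smallest_large) / 2

threshold = learn_threshold(examples)  # 45.0 for the data above

def is_large_learned(x: float) -> bool:
    return x > threshold

# The two rules agree on most inputs but can differ near the boundary:
print(is_large_hand_coded(47), is_large_learned(47))  # False True
```

Feed the learner different examples and the rule changes with no reprogramming, which is the essential idea behind learning from observation.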

There are four amazing skills that AI can demonstrate:

  • computer vision
  • natural language processing
  • complex independent navigation
  • machine learning

Not all AI systems use all of these abilities. Examples of computer vision include the new passport control at the airport; another very popular example is face recognition on an iPhone X or Surface Pro. The second skill is natural language processing: the ability to understand language. A relevant example is Alexa. In the future, some call centres will also use AI's ability to understand language (it has already started). For example, when you call a bank, a robot will be able to answer even complex inquiries, not just tell you the account balance. Examples of complex independent navigation include drones and autonomous aircraft.

Do you think that AI may soon even be better than humans?

Well, it is happening already. Focusing on image recognition, some scientists compared the accuracy of machines with that of humans. Humans' accuracy is about 97%. But AI's accuracy has changed dramatically. Eight years ago, machines were 65% accurate. In 2016, machines drew equal with humans at 97%. Today, in 2018, machines are even better than humans. This is why AI is very likely to change our world, positively and negatively. Some positive examples are that AI-powered machines can understand many languages, can speak in many different accents, are never tired or grumpy, and may be cheaper.

In 1997, IBM's Deep Blue computer marked the start of a big step in AI. For the first time, a machine won against the world champion in chess. Its strategy was programmed by hand; the machine didn't need modern AI, let alone ML or DL. 19 years later, another exciting game was played. In an even more complex game than chess, the ancient board game 'Go', a machine won against world champion Lee Sedol. With 'Go', however, Google faced a big problem: the game has far too many possible moves to programme them all. So Google's programmers used AI: they programmed the rules and objective of the game, and based on that, AlphaGo learned to win. Later, AlphaGo lost against AlphaGo Zero. Both machines used AI, but AlphaGo Zero was even more advanced: it learnt entirely by playing against itself.

Will AI powered machines replace workers?

How much time could be saved by using AI in the future? McKinsey compared which human skills will be easiest to automate. The skills that would be easy to replace include predictable physical work (building cars is already being automated) and collecting and processing data (this is what computers already do well, as with a calculator). On the other hand, the four activities that would not be easily replaced are management, expertise (applying judgement), interface (interacting with people) and unpredictable physical work (e.g. caretakers). The research group discovered that fewer than 10% of jobs can be fully automated, but more than 50% of work activities can be.

What will the future look like?

The following jobs will be in high demand: care providers, educators, managers, professionals and creatives. So, if you are interested in being a doctor, teacher, scientist, engineer, programmer or artist, you are less likely to be replaced by robots. However, AI will also take away jobs, such as those in customer interaction and office support. Waiting tables and IT helpdesks will no longer be such promising careers (robots will fix robots!).

There are three main reasons why these jobs will be automated: to save costs, to provide better customer service and to offer entirely new capabilities. The main reason is better service. Saving costs also plays a big role, e.g. in building cars. And oil and gas platforms will be taken over by robots because the work is dangerous for humans, while robots can go to most places.

In conclusion, AI is already taking over some elements of jobs. As the technology progresses, however, many more jobs may be automated.

The safest jobs are the ones with social skills.

(source: report by Susan Lund from McKinsey: https://www.mckinsey.com/~/media/McKinsey/Global%20Themes/Future%20of%20Organizations/What%20the%20future%20of%20work%20will%20mean%20for%20jobs%20skills%20and%20wages/MGI-Jobs-Lost-Jobs-Gained-Report-December-6-2017.ashx )

Follow @STEAM_WHS on Twitter

The Effect of Legalised Abortion on Crime


Ava (Year 12) investigates how changes in the right to abortion impacted crime rates in the USA during the late 20th century.

Christmas Day, 1989. Crime is just about at its peak in the United States. Within the fifteen years preceding this day, violent crime has risen by over 80 per cent. All seemed set to continue like this, with crime following the same upward trajectory it had been on for many years. However, in the early 1990s, crime began to fall sharply and dramatically in a totally unexpected way; criminologists, police officials, politicians and economists all failed to predict this sudden fall and could offer no clear explanation for why it had occurred.

Many theories were thrown around, from innovative policing strategies to a stronger economy, yet none seemed to offer an expansive or conclusive argument. That is, until Donohue and Levitt hypothesised that this fall in crime rates could all be traced back to a winter day in 1973 – the day when legalised abortion was suddenly extended to the entirety of the United States.

The US has always had a fraught and complicated history regarding abortion. In the embryonic days of the nation, abortion was permissible until the first movements of the foetus could be felt; in 1828, New York became the first state to restrict abortion, and by 1900 it had been made illegal throughout the country. Through the 1960s, several states began to allow abortion under extreme circumstances, such as rape, and by the early 1970s, five states had made abortion entirely legal and broadly available.

It was not until the 22nd of January 1973 that legalised abortion suddenly rippled through the rest of the country, with the US Supreme Court's ruling in Roe v. Wade. This pivotal moment in American legal history, in which Justice Blackmun concluded that "the detriment that the State would impose upon the pregnant woman by denying this choice [abortion] is altogether apparent", would go on to have a monumental impact on crime rates during the 1990s.

Before Roe v. Wade, abortion was expensive and inaccessible, reserved for the daughters of middle-to-upper-class families; now, however, any woman could obtain an abortion, often for less than 100 dollars. The social impacts of this were ground-breaking. Now a woman who was unmarried, in her teens, or poor (sometimes all three) could take advantage of Roe v. Wade. Women from these socio-economic backgrounds tend to bring up children who are 50% more likely to live in child poverty and 60% more likely to grow up with one parent. These two factors combined (childhood poverty and a single-parent household) are among the strongest predictors that a child will have a criminal future. That is not to say that they always predict criminal behaviour, but rather that in certain circumstances they provide very strong indicators that a child will eventually contribute to rising crime rates.

In the first year after Roe v. Wade, there was one abortion for every 4 live births in the United States, and by 1980, one for every 2.25 births. Herein lies the reason why legalising abortion had a larger effect on lowering crime rates than any other single measure: the women in the United States who were most likely to raise a child who would later contribute to crime were now the women most likely to take advantage of the new legal measures allowing them the choice to abort.

In the 1990s, just as children born around the time of Roe v. Wade would have been hitting their teenage years – the years in which young men are most likely to commit crime – the rate of crime began to fall.

This theory – known as the Donohue-Levitt hypothesis – has provoked strong reactions: firstly among the politicians who entered heated debates claiming that their new policing strategies were the reason for the crime slump; secondly among the public, many of whom simply did not believe it. If you are still in need of convincing, look at similar situations in Canada, Australia and Romania, all countries which legalised abortion in some way and saw a drastic fall in crime rates in subsequent years. Better yet, the five US states which legalised abortion five years before Roe v. Wade saw a decrease in crime five years before the rest.

Steve Sailer and John Lott are two critics who have been vociferous in their rejection of the model. They claim that Donohue and Levitt ignore the indisputable fact that the homicide rate of young males (especially young Black males) temporarily skyrocketed in the late 1980s – young men who were born right around the time of the legalisation of abortion. However, Levitt provides a lengthy retort to this on his blog, if you are so inclined, in which he comments on the importance of crack cocaine to understanding the fuller picture.

Therefore, whilst it is more comforting to believe that effective governance has had the biggest impact on falling crime rates within the past thirty years, in reality the granting of choice to women in America is the largest reason for the fall in crime rates during the 1990s.

Inspired by "Where Did All the Criminals Go?" (Chapter 4 of Freakonomics) – Levitt, Donohue

@Freakonomics

Twitter: @DH_Pastoral

Lorca’s Women

Federico García Lorca explored the female soul as no other male writer had done before. His vivid presentation of the oppression that women endure and their internalisation of emotion, in the plays Bodas de Sangre, Yerma and La Casa de Bernarda Alba, is unique and profound. Moreover, Lorca was highly influenced by the "modernismo" movement flourishing in Spain during his lifetime; he was, indeed, close friends with the Surrealist painter Salvador Dalí. Modernist writing reflects less on society and more on individuals, and thus gave Lorca the opportunity to delve deeper into the psychological "state" that is womanhood. Bella Gate (Year 12) summarises her findings to tell us more about Lorca's work.

When Lorca first published Bodas de Sangre (Blood Wedding), Yerma and La Casa de Bernarda Alba (The House of Bernarda Alba) as a complete set, he called them Duende: Obras Completas. Whilst "Obras Completas" quite simply means "Complete Plays", "duende" has a myriad of possible translations. Its literal translation is "goblin" or "elf"; in this case, however, Lorca seems to be referring to the "soul" which some of his characters have and, quite notably, others don't. The "soul" that Lorca was most interested in exploring was certainly female, as one can see in these plays.

Canadian poet and critic Janis Rapoport theorises that these plays form a complete set, with Bodas de Sangre, Yerma and La Casa de Bernarda Alba serving as thesis, antithesis and synthesis respectively. She sees the women in Bodas de Sangre as being like mirrors, in their ability to make the audience reflect on social conventions. Yerma to her is a prism – a self-contained entity that refracts and distorts the qualities of light and image with both internal and external barriers. In La Casa de Bernarda Alba she sees the women as collectively forming a kaleidoscope, as they reflect and refract off each other. She goes so far as to say that the house in the play represents the soul of one individual woman.

In Bodas de Sangre women are bound by their social functions. The characters are not endowed with names, and thus lose a sense of their identity. The principal women are the Bride, the Mother and the Beggar Woman. Perhaps the most interesting to analyse is the Bride, who is continually bound by her circumstances. We see women oppressing women in the form of her servant attempting to instil morality into her. For the Bride this acts as an imprisoning ideology which hinders her in her pursuit of sexual fulfilment. However, this pursuit results in tragedy because of the societal expectations of virginity before marriage placed upon the Bride. The Mother, Janis Rapoport notes, is an affected character rather than an affecting one: she is greatly affected by the grief that she feels for her husband and son (and eventually sons), is continually let down by men, and her entire identity is defined by this. The Beggar Woman symbolises one of the play's more profound themes – the mysteries of life and death – conveying that she is somewhat liberated by old age. However, Lorca highlights how all women are bound, throughout the generations, in different ways: a young woman's predicament is centred around her sexuality, whereas an older woman's is centred around the lives of her sons. Lorca uses water imagery to portray the contrast between a free and a controlled woman. The control and oppression of women is very much the central theme of the play.

Yerma's themes are, perhaps, a little more nuanced. There is again a representation of women of all generations, the eldest being the Pagan Crone, long repressed by the requirements of honour and the strict morality placed upon her. The middle-aged Dolores represents a dichotomy of faith and the supernatural: she prays frequently, yet she practises magic in her fertility rituals with Yerma. Then there is Yerma herself. "Yerma" quite literally means "barren" – ostensibly referring to her inability to produce a child with her husband Juan. However, this barrenness is also symptomatic of the psychological and emotional (as well as physical) emptiness of womanhood. One may see Yerma's quest for a child as a yearning for confirmation of feminine identity. However, like the Mother in Bodas de Sangre, she is, bizarrely, indirectly responsible for the death of her own son: by strangling her husband Juan at the end, she essentially ruins all chance of having a child. In both Bodas de Sangre and Yerma, a woman's passionate sexuality (in the case of the Bride) or erotic deficiency (in the case of Yerma) leads to tragedy. Thus, Lorca highlights the lack of agency women in rural Spain had over their own sexuality.

Rapoport puts forward the idea that the house in La Casa de Bernarda Alba, with its "thick walls", embodies the soul of a single woman, each of the sisters becoming a fragment of that soul. Adela is the most significant of the sisters, perhaps because of her naïveté: she longs for freedom but does not appreciate that it may result in further oppression under the sexual authority of her lover, Pepe el Romano. Bernarda, despite her tyrannical behaviour, is as much a victim of the patriarchy as her daughters, if not more so, for she has absorbed its oppressive values into her own psyche. The different views and lives of the women reflect off each other throughout the play.

Fundamentally, Lorca – remarkably, as a male writer – strikingly presents life for women in rural Spain and the psychological and philosophical impact of oppression, perhaps because he himself was a homosexual who would be murdered by Franco's Nationalist forces at the outbreak of the Spanish Civil War.

Twitter: @English_WHS, @SpanTweetsWHS

Hearing in colour – Synesthesia and musical composition

What if we heard music and at the same time could see colours? What if we composed music to create colours? Louisa (Year 12) investigates synesthesia and musical composition.

Synesthesia is a neurological condition in which the stimulation of one sensory or cognitive pathway leads to automatic, involuntary experiences in another. There are many different types; common examples include grapheme-colour synesthesia, where letters and numbers are seen as clearly coloured, and chromesthesia, where different musical keys, notes and timbres elicit specific colours and textures in one's mind's eye. For example, some synesthetes may clearly see the musical note F as blue or Wednesday as dark green, or experience the number 6 as tasting of strawberries.

How some synesthetes may experience letters and numbers

Whilst some synesthetic associations are more common than others, it is possible for them to occur between any number of senses or cognitive pathways.

The definitive cause of synesthesia is not yet known; however, most neuroscientists agree it is caused by excess interconnectivity between the visual cortex of the brain and the different sensory regions. It is estimated that around 1 in 2,000 people experience true synesthesia, and it is more common in women than men, though it may be more common than reported, as many who have it do not consider it a condition and leave it unreported.

One area with a large concentration of synesthetes is the arts; notable synesthetes include composers Olivier Messiaen, Franz Liszt and Jean Sibelius, Russian author Vladimir Nabokov, artists Vincent van Gogh and David Hockney, jazz legend Duke Ellington and actress Marilyn Monroe.

Composers who experienced chromesthesia (the type of synesthesia in which musical keys, notes and sometimes intervals are associated with colours) often actively incorporated it into their works, and in some cases made it central to their compositions.

How musical keys may be seen by people with chromesthesia

French composer Olivier Messiaen (1908-1992) was quoted as saying “I see colours when I hear sounds but I don’t see colours with my eyes. I see colours intellectually, in my head.” He said that if a particular sound complex was repeated an octave higher, the colour he saw persisted, but grew paler. If the octave was lowered the colour darkened. Only if the sound complex was transposed into a different pitch did the colour inside his head radically change.

For Messiaen, it was vital that performers and listeners of his music understood the colours he was portraying in his compositions, and he achieved this by writing instructions in his scores. For example, pianists in the second movement (Vocalise) of his Quartet for the End of Time, written in a prisoner-of-war camp in 1940, are told to aim for "blue-orange" chords. Similarly, musicians playing 'Couleurs de la cité céleste' are instructed to conjure "yellow topaz" for one chord cluster and "bright green" for the next, among many other examples.

Another composer who actively made use of his synesthesia was the Finnish composer Jean Sibelius (1865-1957). Sibelius wrote that "music is for me like a beautiful mosaic which God has put together". He said that if he heard a violin playing a certain piece of music, he would see a corresponding colour, such as the colour of the sky at sunset in summer. The colour would be uniquely specific and would only be triggered by that particular sound. Many of his compositions thus have strong links to the imagery Sibelius experienced, which may account for the strong emotional pulse that can be heard throughout his work.

Similarly, Franz Liszt (1811-1886) was known to draw on his synesthesia in orchestral rehearsals, saying "O please, gentlemen, a little bluer, if you please! This tone type requires it!" or "That is a deep violet, please, depend on it! Not so rose!" Initially the orchestra believed Liszt was joking, before realising that he did in fact see colours for each tone and key.

It can be difficult to understand the experiences of true synesthetes without having the condition oneself; however, this can be made easier by looking at the works of the synesthetic artist Wassily Kandinsky (1866-1944), often regarded as the first abstract painter. Instead of using his synesthesia to compose new music, he created artwork based on the music he heard.

Kandinsky discovered his synesthesia at a performance of Wagner's opera Lohengrin in Moscow. He said: "I saw all my colours in spirit, before my eyes. Wild, almost crazy lines were sketched in front of me." In 1911, after studying and settling in Germany, he was similarly moved by a Schoenberg concert of the 3 Klavierstücke, Op. 11, and finished painting Impression III (Konzert) two days later.

Impression III (Konzert) – Kandinsky

When studying the music of known synesthetic composers, it is important to bear in mind what they were experiencing when writing it, as this adds another dimension to the music and can change the overall interpretation. It also offers a fascinating link between music and art, adding further complexity to the process of musical composition.

Twitter: @Music_WHS

“Why are German Kindergartens so successful?”


Sofia Justham Bello, Year 12, tells us more about a recent work experience trip to a Kindergarten in Germany, focusing on the differences in educational practice from her own education.

This blog is based on my work experience in a German Kindergarten in Schwäbisch Hall, Southern Germany, which was arranged by the Goethe Institut. The Goethe Institut promotes the study of German abroad and encourages international cultural exchange, through language lessons, lectures, courses and libraries.

In September I entered a logo-designing competition for the Friends of the Goethe Institut London and won, along with 9 other 16-year-olds in the UK, a work-shadowing trip to Germany. I worked at a Kindergarten with children from ages three to six (it involved a lot of singing, going for walks in the forest, and even carpentry!). I really liked how the children there had the freedom to play, and the multicultural aspect of the Kindergarten was uplifting, given events happening in the world today. I may also want to work in education, so it was a useful experience.

An understanding of the German Kindergarten system is important to see why my work experience there was so inspiring. It is common knowledge that the word "Kindergarten" literally means "children's garden" in German. Kindergartens were established as a pre-school educational approach based on social interaction through singing, playing and more practical activities such as painting and arts and crafts.

Arts and crafts ("Basteln") are very important to German children and integral to German culture. When I worked at the Kindergarten, the children were preparing "Laternen" (lanterns) for the traditional children's festival of St. Martin's Day, when, on the 11th of November, Kindergarten children walk the streets holding the lanterns they have made.

These creative teaching methods ensure that children interact with others and thus transition successfully from home to school.

Historically, such "institutions" for young children originated in Bavaria in Germany, arising in the late 18th century to help families support their children whilst both parents worked. Nonetheless, they were not called "Kindergartens" at this point. The term was coined later by Friedrich Fröbel, who created a "play and activity" institute in 1837. He renamed his institute Kindergarten in 1840, reflecting his belief that children should be nurtured and nourished "like plants in a garden".

This idea of children flourishing "like plants in a garden", and the independence connoted by this image, was evident in the Kindergarten where I worked in Schwäbisch Hall. On arrival I noticed the immediate differences between my nursery experience and the "Kindergarten" experience the children were receiving in Germany. The teachers there were shocked that I had started school at the age of four, whereas in Germany Kindergarten is a process that runs from age three all the way up to six. The site also had a "Kinderkrippe" (a crèche) upstairs, so essentially up to six years of a child's life could take place there – a huge part of any childhood. Hence the responsibility the teachers have in shaping that childhood is enormous.

The teaching approach there encourages young children to think and act independently. Moreover, there is a huge focus on nature, and every day the children would go on a walk and connect with it. My first day was "Waldtag" (forest day), part of a national scheme run by the government to encourage children to explore German forests. We spent a long day walking, running and feeding animals such as goats and sheep. In the afternoon we stopped for a break and the children were able to play. One child approached me repeatedly saying the word "Säge", which means "saw"; I thought this had got lost in translation, but to my surprise the children began sawing on the forest floor, constructing small houses out of branches with minimal supervision from the teachers.

It was evident from just one such example that the children there have more freedom to play and no pressure to read or write (which comes naturally later on), and thus their childhood is extended and their collaborative skills are improved. The older children took care of the younger ones, and overall it was an extremely inspiring experience.

Here is a link to the Goethe Institut Website: https://www.goethe.de/en/index.html

@German_WHS