Was Dunkirk Hitler’s greatest mistake?

Georgia, Year 13, explores the British retreat at Dunkirk and argues that Hitler’s greatest mistake came at this point in the war.

Dunkirk was the climactic moment of one of the greatest military disasters in history. From May 26 to June 4, 1940, an army of more than three hundred thousand British soldiers was essentially chased off the mainland of Europe, reduced to an exhausted mob clinging to a fleet of rescue boats while leaving almost all of its weapons and equipment behind for the Germans to pick up. The British Army was crippled for months, and had the Royal Air Force and Royal Navy failed, Germany could have mounted its own D-Day, handing Hitler the keys to London. Yet Dunkirk was a miracle – and not thanks to any tactical brilliance from the British.

In May 1940, Hitler was on track for a decisive victory. The bulk of the Allied armies were trapped in pockets along the French and Belgian coasts, with the Germans on three sides and the English Channel behind. With a justified lack of faith in their allies, the British began planning to evacuate from the Channel ports. Though the French would partly blame their defeat on British treachery, the British assessment was correct: with the French armies outmanoeuvred and disintegrating, France was doomed. And so, it seemed, was the British Expeditionary Force. There were three hundred thousand soldiers to evacuate through a moderate-sized port whose docks were being destroyed by bombs and shells from the Luftwaffe. Britain would be lucky to evacuate a tenth of its army before the German tanks arrived.

Yet this is when the ‘miracle’ occurred – and it did not come from an ally at all, but from the leader of the Nazis himself. On May 24th, Hitler and his high command hit the stop button. Much to the dissatisfaction of his tank generals, Hitler halted the panzer columns that could easily have sliced like scalpels straight through to Dunkirk. The plan now was for the Luftwaffe to pulverise the defenders until the slower-moving German infantry divisions caught up to finish the job. It remains unclear why Hitler issued the order. It is possible that he was worried the terrain was too muddy for tanks, or perhaps he feared a French counterattack. At the end of the war, Hitler claimed that he had allowed the British Expeditionary Force to get away simply as a gesture of goodwill, hoping to encourage Prime Minister Winston Churchill to make an agreement that would leave Germany in control of Europe. Whatever the reason, while the Germans dithered, the British moved with a speed they would rarely display again for the rest of the war.

It was not just the Royal Navy that was mobilised. From British ports sailed yachts, fishing boats and rowing boats; anything that could float was pressed into service.

Under air and artillery fire, the motley fleet evacuated 338,226 soldiers. As for the charge that Britain betrayed its allies: 139,997 of those men were French soldiers, along with Belgians and Poles. Even so, the evacuation was incomplete. Some 40,000 troops were captured by the Germans. The Scotsmen of the 51st Highland Division, trapped deep inside France, were encircled and captured by the 7th Panzer Division commanded by Erwin Rommel. The British Expeditionary Force did save most of its men, but almost all of its equipment – from tanks and trucks to rifles – was left behind.

In spite of this, the British could, and did, come to view the evacuation of Dunkirk as a victory. Indeed, the successful evacuation gave Britain a lifeline to continue the war. In June 1940, neither America nor the Soviets were at war with the Axis powers. With France gone, Britain and its Commonwealth partners stood alone. Had Britain capitulated to Hitler, or signed a compromise peace that left the Nazis in control of Europe, many Americans would have been dismayed – but not surprised.

Hitler’s greatest mistake was giving the British public enduring hope, ruling out any chance of them suing for peace. He gave them an endurance that was rewarded five years later on May 8, 1945, when Nazi Germany surrendered. A British writer whose father fought at Dunkirk wrote that the British public were under no illusions after the evacuation: “If there was a Dunkirk spirit, it was because people understood perfectly well the full significance of the defeat but, in a rather British way, saw no point in dwelling on it. We were now alone. We’d pull through in the end. But it might be a long grim wait…”

How is the Turing Test Relevant to Philosophy?

Kira, Year 13, looks at the Turing test and how criticisms of it bring new ideas and concepts into the philosophy of mind.

Alan Turing

As emerging areas of computer science such as Artificial Intelligence (AI) continue to grow, questions surrounding the possibility of a conscious computer are becoming more widely debated. Many AI researchers have the objective of creating Artificial General Intelligence: AI that has an intelligence, and potentially a consciousness, similar to humans. This has led many to speculate about the nature of an artificial mind, and an important question arises in the wake of this modern development and research: “Can computers think?”

Decades before the development of AI as we know it today, Alan Turing attempted to answer this question in his 1950 paper Computing Machinery and Intelligence. He developed the famous Turing test as a way to evaluate the intelligence of a computer. Turing proposed a scenario in which a test subject would have two separate conversations: one with another human, and one with a machine designed to give human-like responses. These conversations would take place through a text channel, so the result would not be affected by the machine’s ability to render speech. The test subject would then be asked to determine which conversation had taken place with a machine. Turing argued that if they are unable to reliably distinguish the machine from the other human, then the machine has ‘passed the test’ and can be considered intelligent.
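To make the shape of the protocol concrete, here is a toy sketch in Python (a hypothetical illustration, not anything Turing specified: the ‘human’, ‘machine’ and ‘judge’ are canned stand-ins). The point is the structure – text-only exchanges, hidden identities, and a pass criterion of chance-level identification:

```python
import random

def human(prompt):
    return "I'd have to think about that for a while."

def machine(prompt):
    # A perfect mimic: its replies are indistinguishable from the human's.
    return "I'd have to think about that for a while."

def judge(reply_a, reply_b):
    # Faced with indistinguishable transcripts, the judge can only guess
    # which label hides the machine.
    return random.choice(["A", "B"])

def run_test(trials=10_000):
    correct = 0
    for _ in range(trials):
        machine_label = random.choice(["A", "B"])  # hide the machine's identity
        prompt = "What do you think of poetry?"
        replies = {
            "A": (machine if machine_label == "A" else human)(prompt),
            "B": (machine if machine_label == "B" else human)(prompt),
        }
        if judge(replies["A"], replies["B"]) == machine_label:
            correct += 1
    return correct / trials

# Accuracy near 0.5 means the judge cannot reliably tell machine from human,
# i.e. the machine has 'passed' under Turing's criterion.
print(f"judge accuracy: {run_test():.3f}")
```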

At the start of his essay, Turing specifies that he would not be answering “Can computers think?”, but a new question that he believed we are able to answer: “Are there imaginable digital computers which would do well in the imitation game?” However, Turing did believe that a computer able to succeed in ‘the imitation game’ could be considered intelligent in a similar way to a human. In this way, he followed a functionalist idea about the mind – identifying mental properties through mental functions, for example determining intelligence through the actions of a being rather than through some intrinsic quality of a mental state.

Many scholars have criticised the Turing test. John Searle, for instance, put forward the Chinese Room argument and the idea of ‘strong AI’ to illustrate why he believed Turing’s ideas about intelligence to be false. The thought experiment imagines a computer that behaves as though it understands Chinese. It is therefore able to communicate with a Chinese speaker and pass the Turing test, as it convinces the person that they are talking to another Chinese-speaking human. Searle then asks whether the machine really understands Chinese, or whether it is merely simulating the ability to speak the language. The first scenario is what Searle calls ‘strong AI’; he refers to the latter as ‘weak AI’.

In order to answer his question, Searle imagines a situation in which an English-speaking human is placed in a room with a paper version of the computer program. Given sufficient time, this person could be handed a question written in Chinese and produce an answer by following the program’s instructions step by step, in much the same way as a computer does. Although this person is able to communicate with somebody speaking Chinese, they do not actually understand the conversation taking place; they are simply following instructions. In the same way, a computer able to communicate in Chinese cannot be said to understand the language. Searle argues that without this understanding, a computer should not be described as ‘thinking’, and as a result should not be said to have a ‘mind’ or ‘intelligence’ in any remotely human way.
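A toy illustration of the intuition (a hypothetical two-entry rule book, not a real Chinese-speaking program): the room produces fluent-looking replies purely by symbol lookup, with no understanding anywhere in the process.

```python
# Hypothetical rule book: maps the 'shapes' of input symbols to output symbols.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "会，一点点。",  # "Do you speak Chinese?" -> "Yes, a little."
}

def chinese_room(question: str) -> str:
    # The person in the room matches symbols against the rule book and
    # copies out the prescribed answer, understanding none of it.
    return RULE_BOOK.get(question, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # fluent-looking output, no comprehension inside
```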

Searle’s argument has had a significant impact on the philosophy of mind and has come to be viewed as an important argument against functionalism. The thought experiment provides opposition to the idea that the mind is merely a machine and nothing more: if the mind were just a machine, it would be theoretically possible to produce an artificial mind capable of perceiving and understanding all that it sees around it. According to Searle, this is not a possibility. However, many people disagree with this belief – particularly as technology develops ever further, the possibility of a true artificial mind seems more and more likely. Despite this, Searle’s Chinese Room argument continues to aid us in discussions around how we should define things such as intelligence, consciousness, and the mind.

In this way, both the Turing test and Searle’s critique of it shed new light on long-standing philosophical problems surrounding the nature of the human mind. They help to bring together key areas of computer science and philosophy, encouraging a philosophical response to the modern world and revealing how our new technologies can impact philosophy in new and exciting ways.

Invention through desperation – military medical advancements

Military

Jessica, Year 13, explores military medical advancements in recent conflicts, discussing their impact and whether the nature of war acts as an inspiration for innovation.

The conflict in Afghanistan began in 2001 and continued until the majority of British troops withdrew in the final months of 2014. During these years, 6,386 British personnel were injured, with 28 fatalities, leaving the survival rate at 99.6%.
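As a quick check on that figure (assuming, as the sentence implies, that the 28 fatalities are counted among the 6,386 injured):

```latex
\text{survival rate} = \frac{6386 - 28}{6386} \approx 0.9956 \approx 99.6\%
```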

This was unheard of in previous wars and a major success story for military medicine. However, the injuries and trauma suffered by soldiers during this period increasingly involved haemorrhaging and amputations caused by gunshot wounds and IEDs (improvised explosive devices – crude, unconventional homemade bombs). IEDs cause extensive blood loss, and haemorrhage has been blamed for 50% of combat deaths since World War Two. For these soldiers to survive, military medicine had to change in order to preserve life and limb. There are three major advancements in military trauma medicine, all of which arose from the need to find solutions to the new injuries that personnel and medics were now witnessing.

The first is haemostatic dressings. During the Afghanistan conflict, two new dressings were developed: XSTAT and QuickClot powder, which contain components such as fibrinogen and thrombin that catalyse the body’s natural coagulation response. XSTAT uses 92 medical sponges in a pocket-sized injector to pack an open wound and halt bleeding within fifteen seconds, holding pressure and increasing the chance of survival until the patient can reach a medical centre. The sponges also contain a marker visible on X-ray, ensuring that all of them are removed later to prevent infection.

Secondly, there was a development of the traditional tourniquet. A tourniquet is a constricting or compressing device used to control venous and arterial blood flow to a portion of an extremity for a period of time; it works by creating pressure equal to or higher than the patient’s systolic blood pressure. The single-hand-tie tourniquet is a development of the original tourniquet used by army medics, which had to be applied by the medic and so was carried only by them. Without the patient being able to apply their own tourniquet, crucial time and blood were lost while the medic reached the injured individual, reducing their chance of survival and increasing the complexity of their injuries and treatment. This is where the Combat Application Tourniquet (CAT), introduced into the US Army in 2005, came in. It was the first single-hand-tie tourniquet, allowing soldiers to treat their own injuries immediately until the medic could attend and provide more advanced care. The tourniquet distributes pressure over a greater area, which is advantageous because it reduces the underlying tissue and nerve damage, preventing the tissue from becoming ischaemic (deprived of blood) whilst remaining effective. This decrease in the time before a tourniquet is applied has cut the mortality rate from haemorrhaging by 85%.

A third category of advancement is the use of blood and the way it is transported. Blood and blood products, such as platelets, are crucial in the treatment of haemorrhaging and amputations. However, for blood to remain viable for transfusion, it must be kept in a cool, constant environment, far from the natural one in Afghanistan. This was previously a significant disadvantage and contributed to the low survival rates for haemorrhaging, but it improved with the development of the blood container. The Golden Hour mobile blood container stores up to four units of blood and platelets at their required temperatures of six and two degrees Celsius respectively, for 72 hours, without electricity, batteries or ice, to aid emergency medics. Crucially, this enabled blood to be brought forward to the battlefield rather than stored at the field hospital.

The environment of the military and the nature of its role mean that trauma medicine must evolve to deal with the types of injuries it encounters: invention through desperation. However, it is important that care reflects not only the immediate treatment of the patient but also their long-term needs, ensuring they can achieve a high quality of life post-conflict.

What would happen if there was no stigma around mental illness?

Mental Illness

Emily, Year 12, explores why there is a stigma around mental illnesses, how we can get rid of this stigma, and what effect the stigma has on society.

Mental illness is not just one disorder – and many people know that – but what they don’t understand is quite how expansive the list of disorders is. As young girls, we are taught about anxiety, body dysmorphic disorder, depression, addiction, stress and self-harm, but the likelihood is that we know – from personal experience, through friends, family or even social media – that many more mental illnesses exist: bipolar disorder, obsessive-compulsive disorder, schizophrenia, autism and ADHD, for example. Chances are, we all know someone with a mental illness, whether we realise it or not – and the majority of the time these people function the same way as people with no mental illness do. So why is there such a stigma around mental illness, and how can we get rid of it?

When the AIDS epidemic started in the early 1980s, the disease affected only minority groups who already faced discrimination. The disease furthered this, making patients virtual pariahs, until advocacy groups and communities protested to expand awareness and pressured the U.S. government to fund research into the disease and its cure. In only seven years, scientists were able to identify the cause of AIDS as the human immunodeficiency virus (HIV), create the ELISA test to detect HIV in the blood, and establish azidothymidine (AZT) as the first antiretroviral drug to help those suffering from HIV/AIDS. This is a prime example of how public awareness can push science to find treatments. And treatments do not just alleviate symptoms; they also erode the stigma, as more and more people learn about the disease. So why can’t this be the case for mental illness?

In a time when science wasn’t breaking new boundaries every day and knowledge wasn’t being distributed widely, it was easy to see why those with such complicated illnesses were feared and carried such a stigma. Now, however, when the greatest barrier is access to treatment rather than the science, and education about the subject is as good as it has ever been, it is hard to see why there is still such shame in having these illnesses.

But what if there was no stigma? We would have early identification and intervention in the form of screening mechanisms in primary care settings – GP, paediatric, obstetric and gynaecology clinics – as well as in schools and universities. The goal would be to screen those who are at risk of, or showing symptoms of, mental illness and to engage patients in self-care and treatment before the illness severely affects their brains and lives. We would also have comprehensive community-based care for those in more advanced stages of illness, supporting people who are unable to care for themselves and who might otherwise end up homeless, in jail or in mental hospitals.

For example: victims of trauma would be treated for PTSD along with any physical injuries while in hospital, targeting PTSD before symptoms began and the patient could hurt themselves or others; first responders would routinely receive preventative and decompression treatments rather than waiting to see who might show symptoms; mothers would be treated for pre- and post-partum depression as part of pre- and post-natal check-ups instead of waiting until they potentially harmed themselves or their baby; and children with learning disabilities would be identified early on so they could get cognitive training and emotional support, preventing counterproductive frustration over something they cannot control.

Medical economists have shown that this method of proactive mental healthcare would actually reduce the cost of delivering it. It would also relieve emotional stress for patients and their families, ease the financial burden of treatment, and reduce the occurrence of many prevalent social problems. We all know about the mass shootings that occur with grim regularity, and many of these crimes have been perpetrated by young males with an untreated mental illness whose symptoms had been present long before the crime was committed – not that I am excusing their behaviour in any way.

As a worldwide community, we must recognise mental illness for what it is – a medical condition that can be treated, whether with behavioural or cognitive therapy or with medication. In order to dissolve the stigma, we must be involved, ask questions, be kind, be compassionate, and make it our own business. There is only so much science can do if people are not willing to accept the help they are offered – they need to want to get better. The only way this will happen is if we all help to make it known that having a mental illness is not a bad thing, that it is treatable, and that those who have one are no different from anyone else.

Should we reclaim the asylum?

Asylum

Tara, Year 13, explores whether the asylum would provide the best care for those with mental illnesses or whether it should be left in the past.

When someone says asylum in the context of psychology, what do you immediately think of? I can safely assume most readers are picturing haunted Victorian buildings, animalistic patients rocking in corners and scenes of general inhumanity and cruelty. However, asylum has another meaning in our culture: when referring to refugees, it can mean sanctuary, hope and care. Increasingly, people are exploring this original concept of asylum and asking whether we, in a time when mental illness is more prevalent than ever, can reclaim it – or whether the asylum, and institutionalisation in general, is confined to history.

In the last 40 years, there has been a shift towards ‘care in the community’ and deinstitutionalisation, facilitated by the development of various new medications and therapies. This has undeniably led to significant improvements in many individuals’ mental wellbeing, better protected their human rights and reduced stigmatisation.

However, it has also led to significant cuts in facilities for those unable to transition into society, with almost no long-term beds available in mental health hospitals or inpatient units. Whilst this has left some dependent on family and friends for support, many have ended up in prison or homeless, with a third of the homeless population estimated to be suffering from schizophrenia or bipolar disorder. Some would therefore argue that a reinvention and rebranding of the asylum could provide long-term care for severely and chronically ill patients who, even with intensive therapies and drugs, are unlikely to reintegrate back into society.

Designed in collaboration with patients and experts, these ‘asylums’ are not necessarily all intended to be large-scale hospitals. The system is intended to be flexible, varied and voluntary where possible. By providing more community-based institutions, with as low a density of residents as possible, we can maximise privacy, and trained staff can focus on each patient as an individual in a less punishing environment, removing many of the factors contributing to their distress and improving their overall quality of life.

Arguably, patients may become less isolated, as they are given a safe space to socialise and engage with people they can relate to and support. Unlike temporary units and mental health wards, these institutions would provide long-term stability and respite, away from the continuous turbulence and disruption typical of hospitals.

Lastly, many would benefit from the structure, intensive therapy and monitoring of medication provided by institutionalisation, which greatly reduces the likelihood of individuals harming themselves or relapsing. Some would argue the notion is too idealistic, a utopian model of mental health care; yet even if it seems unattainable, it demonstrates to policymakers the importance, and the possibility, of a change in direction.

This reinvention would require considerable time, money and commitment, especially as mental health care has been historically underfunded. The asylum might seem like a taboo topic of the past, but if we can shift our focus, overcome our assumptions and reclaim the asylum in both meaning and function, it could be a thing of the future.

Is nihilism really hopeless?

Nietzsche

Anya, Year 13, explores what characterises nihilism and investigates its worth: is it hopeless or actually positive?

Nihilism, according to the Oxford Dictionary, is the rejection of all religious and moral principles in the belief that life is meaningless, which, strictly speaking, does sound quite despairing. Yet, however hopeless the Oxford Dictionary would have us think it is, nihilism can allow (perhaps surprisingly) room for personal, moral and spiritual growth.

Nihilism undoubtedly stems from pessimism. Indeed, Nietzsche, the German philosopher and scholar who is often associated with it, called nihilism “the most extreme form of pessimism”.

The path to becoming a nihilist starts with weariness and a loss of faith in social, legal and cultural values widely held in our society. When people begin to feel alienated from their values and do not replace their value system with any other known system, such as a new religion or political philosophy, they become nihilists. They are disappointed with the egoistic nature of ‘truth’ and ‘morality’ but at the same time recognise that those things are necessary.

Often, free will seems contradictory: we depend on a value system that doesn’t exist and have depended on previous value systems which we have seen crumble. Each time we encounter a new system we conform to its values, we feel bound by them, and those of us who rebel – criminals, for instance – are cast out from society. If none of these systems ever really existed, as the nihilist claims, we are just going around in a cycle of limiting our life choices for no reason. Even basic values, such as getting an education or a good job, are placed in a sphere far beyond what is reachable.

The nihilist realises that every time someone begins to talk about “the real world” they are merely talking about a fictitious world because, from a nihilist perspective, every category used to measure and qualify our world is fake. In summary, the beginning of a nihilist lifestyle sounds a lot like the act of giving up and becoming a recluse, not to mention very dejected.

However, Nietzsche claims that nihilism is a necessary step in the transition to a revaluation of all the values one holds. He outlines two distinct forms of the philosophy: passive and active.

Passive nihilism is characterised by a weak will. This is the kind of nihilism most commonly referenced in popular culture, and it brings about little more than mental exhaustion and no change. A passive nihilist sees the emptiness of general external values (such as various social constructs) and projects it onto individual internal beliefs (such as what you feel is good and bad), which results in a loss of personal authority. This type of nihilism can truly be called hopeless. It plagues the mind, often leading the person to try to remove all responsibility from themselves as the mind seeks to hold onto something that isn’t arbitrary, which can lead to hollow escapes such as excessive drinking, meaningless relationships and general “self-narcotisation”. Yet any attempt to escape nihilism without actually re-evaluating one’s own values only makes it worse.

On the other hand, active nihilism is characterised by a strong will. This constructive nihilism goes beyond simple judgement and moves on to action: specifically, the destruction of the remaining, meaningless status quo and the rebuilding of values and ethics through thought and reason. The will is made stronger still by the forced recognition that practically all our value systems are devoid of meaning, combined with the power to accept that this meaninglessness serves a purpose, as ironic and oxymoronic as that may seem. Nietzsche claims that this form of thought is “a divine way of thinking”. An active nihilist will recognise the necessity of the lies and oversimplifications of life and begin to value the irrationality of how we live, as these are the conditions which must exist for people truly to have the ability to think for themselves.

It is important to note that nihilism does not replace values, at least according to Nietzsche, but rather makes room for those values to be broken down and reconstructed. Nietzsche stressed that nihilism is merely a means to an end, not an end in itself. In this way, it becomes a form of existential nihilism: a contradictory principle in which we accept that values are meaningless and fake whilst building new ones for ourselves. Active nihilism opens doors to re-evaluating and, more importantly, constructing new values for ourselves. In this way, we achieve a sense of freedom as well as infinitely greater insight into ourselves and the people around us.

Thus, nihilism is not inherently hopeless; instead, it can be said to create hope, as it pushes us to change, to ask questions and to find answers for ourselves. Active nihilism is arguably necessary for any kind of social, political or religious revolution. To paraphrase Sartre: if our life is the only thing we get to experience, then it is the only thing that matters; if the universe has no principles, then the only principles relevant are the ones we decide on; and if the universe has no purpose, then we get to dictate what its purpose is. So, whilst a loss of faith may lead to nihilism, nihilism leads to new hope.

China: Should we be worried?

Sofia, Year 13, discusses whether the increasing power of China is something that should be concerning the global community.

China is increasingly becoming a hot topic amongst economists as we see the developing influence it has on the western world. We are seeing a new form of colonialism – neo-colonialism – whereby China, as the second largest economy in the world, holds significant power over other countries. One would expect this to apply only to lower-income countries; however, China is beginning to power the West’s markets and economies too, and even to exert political control.

It is evident that many African countries increasingly depend on China as a trading partner: China–Africa trade was worth $10.5 billion in 2000, $40 billion in 2005 and $166 billion in 2011, and China is currently Africa’s largest trading partner, having surpassed the US in 2009. However, dependency on China extends more deeply than trade. China has been providing many African countries with loans in the form of top-down development projects. Examples include a $3.2 billion railway in Kenya, running 300 miles from Nairobi to Mombasa (a journey faster than the equivalent train trip from Philadelphia to Boston); a $526 million dam in Guinea; and a $475 million light rail system in Ethiopia, the first of its kind in sub-Saharan Africa. These infrastructure projects are effectively loans, but extremely risky ones, with low or no interest, and often much of the money is never fully repaid. This suggests that China is not investing in these projects for economic benefit, but for leverage over the recipient country – political leverage it can use in votes at the UN, such as those involving the China/Taiwan governance issue or China’s allies such as North Korea.

In the most recent vote involving condemnation of North Korea, only 12 of the 54 countries in Africa voted against China’s ally. It has also been found that if a country recognises Taiwan (which China regards as part of its own territory) as a country in its own right, it receives 2.7 fewer Chinese infrastructure loans a year; and if an African country votes overwhelmingly along with China in the UN General Assembly, it receives 1.8 more infrastructure projects a year. Increasingly, then, China is controlling these vulnerable countries’ economies as well as their political views.

However, this is not only the case in low-income countries: in recent years, China has been using a similar technique to gain influence over Europe. China is the EU’s largest source of imports, accounting for 20.3% of them in 2015. China has also invested heavily in Europe – arguably for profit, though some projects may also be aimed at political influence, even though European economies are significantly larger than African ones. Greece and Hungary worked together to prevent the EU from condemning a tribunal’s finding against China over its actions in the South China Sea. China has also recently invested half a billion euros in the Greek port of Piraeus and in the Belgrade–Budapest railway. It has been seen to drive a wedge between the UK and the USA, by decreasing trade between the two and siding with Europe on matters concerning climate change, and to exploit links with certain countries to obstruct common foreign policy in areas such as human rights.

It is clear that China is having an increasing influence on countries everywhere, and that this is increasingly leading to a loss of democracy on the international stage. Countries should be wary of this growing influence and should reduce their dependency on the superpower.

Does the Harkness Method improve our understanding of Maths?

Elena and Amelia, Y12 Further Mathematicians, explore how the Harkness Method has opened up a new way of thinking about Pure Maths and how it allows them to enhance their mathematical abilities.

For Further Maths A Level, the Maths department has adopted a new style of teaching: the Harkness Method. It involves learning by working through problem sets. The problems give clues as to how to reach the answer – rather than stating the rules and giving examples, they make us work the rules out ourselves. These problem sets are given for homework, and we then discuss them together during the next lesson, writing the answers on the board and comparing our results with each other.

Elena:

At the beginning of term, I found it quite challenging to complete exercises without knowing what rules I was expected to apply to the problems, as each question seemed to be completely different to the one preceding it. The tasks also require us to use our previous GCSE knowledge and try to extend it ourselves through trial and error and by applying it to different situations and problems. I found it difficult to understand how to apply a method to solve different problems as previously each problem came with a defined method.

As the lessons progressed, I started enjoying this method of teaching as it allowed me to understand not only how each formula and rule had come to be, but also how to derive them and prove them myself – something which I find incredibly satisfying. I also particularly like the fact that a specific problem set will test me on many topics. This means that I am constantly practising every topic and so am less likely to forget it. Also, if I get stuck, I can easily move on to the next question.

Furthermore, not only do I improve my problem-solving skills with every problem sheet I complete, I also see how the other girls in my class think about each problem, and so how each question can be approached in more than one way to reach the same answer – there is no set way of thinking for a problem.

This is what I love about maths: that there are many ways of solving a problem. Overall, I have grown to like and understand how the Harkness Method aims to challenge and extend my maths skills, and how it has made me improve the way I think of maths problems.

Amelia:

When I first started the Harkness approach for Pure Maths in September, I remember feeling rather sceptical about it, as it was unlike any method of learning I had encountered before. To begin with, I found it slightly challenging to answer the questions without knowing what topic they were leading to, and found it confusing that each sheet contained a mixture of topics.

However, I gradually began to like this, as it meant I could easily move on and still complete most of the homework, something you cannot do with the normal method of teaching. Moreover, I found it extremely beneficial to learn the different topics gradually over many lessons, as I think this improved my understanding; for example, we learnt differentiation from first principles, which gave me the opportunity to comprehend how it actually works instead of merely remembering how to do it.
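As a sketch of what ‘first principles’ means here – the standard limit definition of the derivative, with f(x) = x² as a worked example:

```latex
f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h},
\qquad
\frac{(x+h)^2 - x^2}{h} = \frac{2xh + h^2}{h} = 2x + h \;\longrightarrow\; 2x
\text{ as } h \to 0 .
```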

Furthermore, I think the best part of the Harkness Method is that you are learning many topics at a time, which means that you cannot forget them. Compare this with the normal method: I remember finding revision for GCSEs difficult because I had forgotten the topics I had learnt at the beginning of Year 10. I also began to enjoy the sheets more and more, because the majority of the questions are closer to problem-solving, which I have always found enjoyable and helpful – you have to think about what you need to use instead of the question simply telling you.

Moreover, I very much enjoyed seeing how other people completed the questions, as they would often have other methods, some of which I found far easier than my own. The other benefit of the lesson being more like a discussion is that it has often felt like having multiple teachers, as my fellow class members have all been able to explain the topics to me. I have found this very useful as I am in a small class of only five; however, I suspect the method would not work as well in larger classes.

Although I have found the Harkness Method very good for Pure Maths, I think it would work far less well for other parts of maths, such as statistics. This is because statistics is more about learning rules, many of which you cannot learn gradually.

What are the links between Romance languages and music?

Matilda, Year 13, investigates the links between Romance languages and music to discover whether learning one can help in understanding the other.

Music and language

It is often said that music is the ‘universal language of mankind’, due to its great expressive powers which have the ability to convey sentiments and emotions.

But what are the connections between music and languages?

A Romance language is a language derived from Latin, and this group of languages shares many similarities in both grammar and vocabulary. The five most widely spoken Romance languages are Spanish (with 470 million speakers), Portuguese, French, Italian and Romanian.

There are three main connections between languages and music:

The first of these is the role of melody in recall:

There is a link between language and music in remembering words: in one study, words were better recalled when learned as a song rather than as speech. This is because melody and rhythm give the memory cues that help recall information.[1]

Language, music, and emotion:

Robin Dunbar, the British anthropologist and evolutionary psychologist who specialises in primate behaviour, says that music and language help to knit people together in social groups. One reason may be that musicians process music as a language in their heads: studies have shown that the planum temporale in the brain is active in everyone while listening to music.

However, in non-musicians the right-hand side was the most active, whereas in musicians the left side – the side believed to control language processing – dominated. This suggests that musicians understand music as a language in the brain.

In another study, scientists analysed Broca’s area, which is crucial in language and music comprehension and is also responsible for our ability to use syntax. Research has shown that musicians have a greater volume of grey matter in Broca’s area, suggesting that it is responsible for both speech and music comprehension.

The relationship between music and languages:

Both music and languages share the same building blocks as they are compositional. By this, I mean that they are both made of small parts that are meaningless alone but when combined can create something larger and meaningful.

For example, the words ‘I’, ‘love’ and ‘you’ do not mean much individually; however, when constructed into a sentence, they carry deep sentimental value. The same goes for musical notes, which when combined can create something beautiful and full of meaning.

Musical training has been shown to improve language skills.[2] In 2011, developmental psychologists in Germany conducted a study examining the relationship between the development of music and language skills. In the experiment, they separated children aged four into two groups, one of which received musical training while the other did not.

Later, they measured the children’s phonological ability (the ability to use and manipulate the sounds of language) and discovered that those who had received music lessons were better at this. This shows that learning and understanding language can go hand in hand with musical learning and ability.

References: 

[1] See https://www.theguardian.com/teacher-network/2018/mar/14/sound-how-listening-music-hinders-learning-lessons-research
[2] See https://www.psychologytoday.com/intl/blog/the-athletes-way/201806/how-does-musical-training-improve-language-skills

The Brain Chemistry of Eating Disorders

Jo, Year 13, explores what is happening chemically inside the brains of those suffering from eating disorders and shows how important this science is to understanding these mental health conditions.

An eating disorder is any of a range of psychological disorders characterised by abnormal or disturbed eating habits. Anorexia is defined as a lack or loss of appetite for food, and as an emotional disorder characterised by an obsessive desire to lose weight by refusing to eat. Bulimia is defined as an emotional disorder characterised by a distorted body image and an obsessive desire to lose weight, in which bouts of extreme overeating are followed by fasting, self-induced vomiting or purging. Anorexia and bulimia are often chronic, relapsing disorders, and anorexia has the highest death rate of any psychiatric disorder. Individuals with anorexia and bulimia are consistently characterised by perfectionism, obsessive-compulsiveness and dysphoric mood.

Dopamine and serotonin function are integral to both of these conditions; how does brain chemistry enable us to understand what causes anorexia and bulimia?

Dopamine

Dopamine is a compound present in the body as a neurotransmitter, primarily responsible for pleasure and reward, and in turn it influences our motivation and attention. It has been implicated in the symptom pattern of individuals with anorexia, specifically in the mechanisms of reinforcement and reward involved in anorexic behaviours such as restricting food intake. Dysfunction of the dopamine system contributes to characteristic traits and behaviours of individuals with anorexia, including compulsive exercise and the pursuit of weight loss.

In people suffering from anorexia, dopamine levels are stimulated by restricting food to the point of starvation: people feel ‘rewarded’ by severely reducing their calorie intake, and in the early stages of anorexia, the more dopamine released, the more rewarded they feel and the more reinforced the restricting behaviour becomes. In bulimia, dopamine serves as the ‘reward’ and ‘feel good’ chemical released in the brain when overeating. Dopamine ‘rushes’ affect people with both disorders, but in anorexia it is starving that releases dopamine, whereas in bulimia it is binge eating.

Serotonin

Serotonin is responsible for feelings of happiness and calm – too much serotonin can produce anxiety, while too little may result in feelings of sadness and depression. Evidence suggests that altered brain serotonin function contributes to the dysregulation of appetite, mood and impulse control seen in anorexia and bulimia. High levels of serotonin may result in heightened satiety, meaning it is easier to feel full. Starvation and extreme weight loss decrease levels of serotonin in the brain, which brings temporary relief from negative feelings and emotional disturbance, and this reinforces anorexic symptoms.

Tryptophan, an essential amino acid found in the diet, is the precursor of serotonin – the molecule required to make it. Theoretically, bingeing behaviour is consistent with reduced serotonin function, while anorexia is consistent with increased serotonin activity; so decreased tryptophan levels in the brain, and therefore decreased serotonin, increase bulimic urges.

Conclusions

Distorted body image is another key concept in understanding eating disorders. The area of the brain known as the insula is important for appetite regulation and also for interoceptive awareness: the ability to perceive signals from the body such as touch, pain and hunger. Chemical dysfunction in the insula, a structure that integrates mind and body, may lead to the distorted body image that is a key feature of anorexia. Some research suggests that the body image distortion experienced by people with anorexia can be related to alterations of interoceptive awareness; this could explain why a person recovering from anorexia can draw a self-portrait of their body that is typically three times its actual size. Prolonged untreated symptoms appear to reinforce the chemical and structural abnormalities seen in the brains of those diagnosed with anorexia and bulimia.

Therefore, in order not only to understand but also to treat anorexia and bulimia, it is essential to look at the brain chemistry behind these disorders, and so better understand how to go about treating them successfully.