Invention through desperation – military medical advancements


Jessica, Year 13, explores military medical advancements in recent conflicts, discussing their impact and whether the nature of war acts as an inspiration for innovation.

The conflict in Afghanistan began in 2001 and continued until the majority of British troops withdrew in the final months of 2014. During these years, 6,386 British personnel were injured, with 28 fatalities, leaving the survival rate at 99.6%.
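
As a quick sanity check, that survival figure follows directly from the casualty counts above (a minimal Python sketch; the variable names are mine):

```python
# Illustrative arithmetic only: reproducing the survival rate quoted above.
injured = 6386    # British personnel injured, 2001-2014
fatalities = 28   # fatalities among those casualties

survival_rate = (injured - fatalities) / injured * 100
print(f"Survival rate: {survival_rate:.1f}%")  # -> Survival rate: 99.6%
```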

This was unheard of in previous wars and a major success story for military medicine. However, the injuries sustained during this period increasingly involved haemorrhaging and amputations caused by gunshot wounds and IEDs (improvised explosive devices – crude, unconventional homemade bombs). The extensive blood loss these weapons cause is significant: haemorrhage has been blamed for around 50% of combat deaths since the Second World War. For these soldiers to survive, military medicine had to change in order to preserve life and limb. Three major advancements in military trauma medicine all arose from the need to find solutions to the new injuries that personnel and medics were now witnessing.

The first is haemostatic dressings. During the Afghanistan conflict, two new dressings were developed: XSTAT and QuikClot powder, which contain components such as fibrinogen and thrombin that catalyse the body’s natural coagulation response. XSTAT uses 92 medical sponges in a pocket-sized injector to pack an open wound and halt bleeding within fifteen seconds, holding pressure until the patient can reach a medical centre and so increasing the chance of survival. The sponges also contain a marker visible on X-ray, ensuring they can all be removed later to prevent infection.

Secondly, there was a development of the traditional tourniquet. A tourniquet is a constricting or compressing device used to control venous and arterial blood flow to a portion of an extremity for a period of time. It works by creating pressure equal to or higher than the patient’s systolic blood pressure. The single-handed tourniquet is a development of the original tourniquet used by army medics, which had to be applied by a medic and so was carried only by medics. Because patients could not apply their own tourniquets, crucial time and blood were lost while the medic reached the injured individual, reducing the chance of survival and increasing the complexity of the treatment and injuries. This led to the Combat Application Tourniquet (CAT), introduced into the US Army in 2005. It was the first tourniquet that could be tied with one hand, allowing soldiers to treat their own injuries immediately until the medic could attend and provide more advanced care. The CAT distributes pressure over a greater area, which reduces the underlying tissue and nerve damage and prevents the limb from becoming ischaemic (deprived of blood supply) while remaining effective. This reduction in the time before a tourniquet is applied has cut mortality from haemorrhage by 85%.
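
The occlusion rule described above can be captured in a couple of lines (a minimal sketch; the function name and example pressures are mine, for illustration only):

```python
# Sketch of the rule above: a tourniquet controls arterial bleeding only
# when the pressure it applies is at or above the patient's systolic BP.
def occludes_arterial_flow(applied_mmHg: float, systolic_mmHg: float) -> bool:
    """True if the applied pressure is sufficient to stop arterial flow."""
    return applied_mmHg >= systolic_mmHg

print(occludes_arterial_flow(180, 120))  # True: flow occluded
print(occludes_arterial_flow(100, 120))  # False: pressure too low
```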

A third category of advancements is in the use of blood and the way it is transported. Blood and blood products, such as platelets, are crucial in the treatment of haemorrhage and amputation. However, for blood to remain viable for transfusion it must be kept in a cool, constant environment, far from the natural one in Afghanistan. This was previously a significant disadvantage and contributed to low survival rates for haemorrhage, but it improved with the development of the blood container. The Golden-Hour mobile blood container stores up to four units of blood and platelets at their required temperatures of six and two degrees Celsius respectively, for 72 hours, without electricity, batteries or ice. Crucially, this enabled blood to be brought forward to the battlefield rather than stored at the field hospital.
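
The constraint the container satisfies can be sketched as a simple check (illustrative only: the function is mine, not a real device interface, and I treat the temperatures quoted above as the bounds of an acceptable band):

```python
# Hypothetical cold-chain check: every logged temperature must stay
# within the band for the blood to remain viable for transfusion.
def cold_chain_ok(readings_celsius: list[float],
                  low: float = 2.0, high: float = 6.0) -> bool:
    """True if all temperature readings fall inside the permitted band."""
    return all(low <= t <= high for t in readings_celsius)

print(cold_chain_ok([4.1, 3.8, 5.2, 4.9]))  # True: band maintained
print(cold_chain_ok([4.1, 7.3, 4.0]))       # False: band breached
```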

The environment of the military and the nature of its role means that trauma medicine needs to evolve to deal with the style of injuries it is experiencing: invention through desperation. However, it is important that the care not only reflects the immediate treatment of the patient but also considers their long-term care to ensure they can achieve a high quality of life post-conflict.

What would happen if there was no stigma around mental illness?


Emily, Year 12, explores why there is a stigma around mental illnesses, how we can get rid of this stigma, and what effect the stigma has on society.

Mental illness is not just one disorder – many people know that – but what they don’t appreciate is quite how expansive the list of disorders is. As young girls, we are taught about anxiety, body dysmorphic disorder, depression, addiction, stress, and self-harm, but the likelihood is that we know – from personal experience, through friends, family or even social media – that many more mental illnesses exist: bipolar disorder, obsessive-compulsive disorder, schizophrenia, autism and ADHD, for example. Chances are we all know someone with a mental illness, whether we realise it or not – the majority of the time these people function in the same way as people without mental illness. So why is there such a stigma around mental illness, and how can we get rid of it?
When the AIDS epidemic started in the early 1980s, the disease mainly affected minority groups who already faced criticism. The disease only furthered this and made patients virtual pariahs, until advocacy groups and communities protested to expand awareness and pressured the U.S. government to fund research into the disease and its cure. In only seven years, scientists were able to identify the human immunodeficiency virus (HIV) as the cause of AIDS, create the ELISA test to detect HIV in the blood, and establish azidothymidine (AZT) as the first antiretroviral drug to help those suffering from HIV/AIDS. This is a prime example of how public awareness can push scientists to expand the boundaries of their knowledge and find treatments. And along with alleviating symptoms, treatments also erode the stigma, as more and more people learn about the disease. So why can’t this be the case for mental illness?

In a time when science wasn’t breaking new boundaries every day, and knowledge wasn’t being distributed properly, it is easy to see why those with such complicated illnesses were feared and had such a stigma surrounding them. However, now that the greatest barrier is access to treatment rather than the science itself, and education about the subject is better than it has ever been, it is hard to see why there is still such shame in having these illnesses.

But what if there was no stigma? We would have early identification and intervention in the form of screening mechanisms in primary care settings – GP, paediatric, obstetric and gynaecological clinics and offices – as well as in schools and universities. The goal would be to screen those who are at risk of, or are showing symptoms of, mental illness and to engage patients in self-care and treatment before the illness severely affects their brains and lives. We would also have community-based comprehensive care for those in more advanced stages of illness, supporting people who are unable to care for themselves and who might otherwise end up homeless, in jail or in mental hospitals.
For example, victims of trauma would be treated for PTSD alongside any physical injuries while in hospital, targeting PTSD before symptoms appeared and the patient could hurt themselves or others; first responders would routinely receive preventative and decompression treatments for PTSD rather than waiting to see who might show symptoms; and mothers would be treated for pre- and post-partum depression as part of pre- and post-natal check-ups, instead of waiting until they potentially harmed themselves or their baby. Children with learning disabilities would be identified early on so they could receive cognitive training and emotional support, preventing counterproductive frustration over something they cannot control.

Medical economists have shown that this kind of proactive mental healthcare would actually reduce the cost of delivering it. It would also relieve emotional stress for patients and their families, ease the financial burden of treatment, and reduce the occurrence of many prevalent social problems. We all know about the mass shootings that occur with grim regularity, and a great many of these crimes have been perpetrated by young men with untreated mental illnesses whose symptoms had been present long before the crime was committed – not that this excuses their behaviour in any way.

As a worldwide community, we must recognise mental illness for what it is – a medical condition that can be treated, whether with behavioural or cognitive therapy or with medication. To dissolve the stigma, we must be involved, ask questions, be kind, be compassionate, and make it our own business. There is only so much science can do if people are not willing to accept the help they are offered – they need to want to get better. That will only happen if we all help to make it known that having a mental illness is not a shameful thing, that it is treatable, and that those living with one are no different from anyone else.

The Brain Chemistry of Eating Disorders

Jo, Year 13, explores what is happening chemically inside the brains of those suffering from eating disorders and shows how important this science is to understanding these mental health conditions.

An eating disorder is any of a range of psychological disorders characterised by abnormal or disturbed eating habits. Anorexia is defined as a lack or loss of appetite for food, and as an emotional disorder characterised by an obsessive desire to lose weight by refusing to eat. Bulimia is defined as an emotional disorder characterised by a distorted body image and an obsessive desire to lose weight, in which bouts of extreme overeating are followed by fasting, self-induced vomiting or purging. Anorexia and bulimia are often chronic, relapsing disorders, and anorexia has the highest death rate of any psychiatric disorder. Individuals with anorexia and bulimia are consistently characterised by perfectionism, obsessive-compulsiveness, and dysphoric mood.

Dopamine and serotonin function are integral to both of these conditions; how does brain chemistry enable us to understand what causes anorexia and bulimia?

Dopamine

Dopamine is a compound present in the body as a neurotransmitter. It is primarily responsible for pleasure and reward, and in turn influences our motivation and attention. It has been implicated in the symptom pattern of individuals with anorexia, specifically in the mechanisms of reinforcement and reward that accompany anorexic behaviours such as restricting food intake. Dysfunction of the dopamine system contributes to characteristic traits and behaviours of individuals with anorexia, including compulsive exercise and the pursuit of weight loss.

In people suffering from anorexia, dopamine levels are stimulated by restricting food to the point of starvation. People feel ‘rewarded’ by severely reducing their calorie intake, and in the early stages of anorexia, the more dopamine that is released, the more rewarded they feel and the more reinforced the restricting behaviour becomes. In bulimia, dopamine serves as the ‘reward’ and ‘feel good’ chemical released in the brain when overeating. Dopamine ‘rushes’ affect people with both anorexia and bulimia, but for people with anorexia it is starving that releases dopamine, whereas for people with bulimia it is binge eating.

Serotonin

Serotonin is responsible for feelings of happiness and calm – too much serotonin can produce anxiety, while too little may result in feelings of sadness and depression. Evidence suggests that altered brain serotonin function contributes to the dysregulation of appetite, mood, and impulse control seen in anorexia and bulimia. High levels of serotonin may result in heightened satiety, meaning it is easier to feel full. Starvation and extreme weight loss decrease serotonin levels in the brain, giving temporary relief from negative feelings and emotional disturbance, which reinforces anorexic symptoms.

Tryptophan is an essential amino acid obtained from the diet and is the precursor of serotonin, meaning it is the molecule required to make serotonin. Theoretically, bingeing behaviour is consistent with reduced serotonin function, while anorexia is consistent with increased serotonin activity; so decreased tryptophan levels in the brain, and therefore decreased serotonin, increase bulimic urges.

Conclusions

Distorted body image is another key concept in understanding eating disorders. The area of the brain known as the insula is important for appetite regulation and also for interoceptive awareness – the ability to perceive signals from the body such as touch, pain, and hunger. Chemical dysfunction in the insula, a structure that integrates mind and body, may lead to the distorted body image that is a key feature of anorexia. Some research suggests that the body image distortion experienced by people with anorexia can be related to alterations in interoceptive awareness, which could explain why a person recovering from anorexia can draw a self-portrait of their body that is typically three times its actual size. Prolonged untreated symptoms appear to reinforce the chemical and structural abnormalities seen in the brains of those diagnosed with anorexia and bulimia.

Therefore, in order not only to understand but also to treat anorexia and bulimia successfully, it is essential to look at the brain chemistry behind these disorders.


How fungi help trees to communicate

Freya, Year 13, explores how trees are able to communicate and help each other using a network of fungi in the soil.

Underneath your feet there could be as much as 300 miles of fungal thread stacked in the soil. This special network of fungi, called the mycorrhizal network, brings fungi and trees together in a symbiotic relationship that helps trees to communicate – hence its nickname, the ‘wood wide web’. You may have unknowingly seen mycorrhizae before: they are long and white and look a bit like silly string.

When a tree seed germinates, its roots grow towards the fungi in the soil. In return for nutrients and water from the fungi, the tree sends sugars down to them. This is vital for the fungi, which cannot photosynthesise and so cannot make the sugars they need for growth. The network connects not only fungi and trees but also the different trees in a given area: all the trees whose roots grow into the mycorrhizae are linked, and this allows them to communicate.

Using the mycorrhizal network, a tree that has been attacked by a certain pest can send danger signals to other trees. When other trees pick up this signal, they release their own chemicals above ground to attract the pest’s predators towards them, thereby reducing the pest population.

Amazingly, when a tree ‘knows’ it is dying, it will do everything it can to aid the survival of the trees around it. Researchers noted that as an injured tree was dying, it sent all of its carbon down through its roots into the mycorrhizal network so that it could be absorbed by neighbouring trees, strengthening them.

The driving researcher behind this work, Suzanne Simard, found that trees will help each other out when they are in shade. She used carbon-14 tracers to monitor the movement of carbon from one tree to another, and found that trees growing in more light would send more carbon to trees in shade, allowing them to photosynthesise more and so helping them provide food for themselves. At times when one tree had lost its leaves and could not photosynthesise as much, more carbon was sent to it from evergreen trees.

This discovery could be used in the future to reduce the disastrous effects of deforestation. If loggers keep the network of fungi intact, with many of the oldest trees still present, new trees planted will be able to utilise and reuse carbon more efficiently thanks to the wood wide web.

Nanotechnology and its future in medicine – 07/09/18

Maya (Year 11) discusses the uses of nanotechnology in medicine, thinking about how far it has come and how it has helped doctors. She also considers the dangers of using such small technology and the future benefits it may bring.

Technology in medicine has come far, and with it the introduction of nanotechnology. Nanotechnology is the manipulation of structures and their properties at the atomic and molecular level – the nanoscale, a nanometre being one-billionth of a metre. It has many applications, including electronics, energy production and medicine, and its value lies in this diversity. Nanotechnology is useful in medicine because of its size: it interacts with biological molecules of the same proportions or larger. It is a valuable new tool for research and for combatting various diseases.

In medicine, nanotechnology is already being used in a wide variety of areas, the principal one being cancer treatment. In 2006, a report issued by NanoBiotech Pharma stated that developments in nanotechnology would be focused mostly on cancer treatments. Drugs such as Doxil, used to treat ovarian cancer, use nanotechnology to evade the immune system, enabling the drug to be delivered to the disease-specific areas of the body. Nanotechnology is also helping in neuroscience, where European researchers are currently using it to carry electrical activity across the dead brain tissue left behind by strokes and other illnesses. The initial research was carried out to gain a more in-depth analysis of the brain and to create more biocompatible grids (pieces of technology that surgeons place in the brain to find where a seizure has taken place). These are more sophisticated than previous technologies and, when implanted, do not cause as much damage to existing brain tissue.

Beyond cancer treatment and research, nanotechnology is used in many areas of medicine, from appetite control to medical tools, bone replacement and even hormone therapy. Nano-sized particles are enhancing new bone growth, and there are even wound dressings containing nanoparticles that provide powerful microbial resistance. With these developments we are revolutionising the field of medicine, and with further advances we will be able to treat diseases as soon as they are detected.

Scientists hope that in the future nanotechnology can go even further and replace chemotherapy altogether: fighting cancer by using gold and silica nanoparticles that bind to the mutated cells in the body, then heating the gold particles with infra-red lasers to kill the tumour cells. This would reduce the risk of damage to surrounding cells, which the laser would affect far less than chemotherapy does.

In other areas, nanotechnology is developing further in diagnostics and medical data collection. Using this technology, doctors would be able to look for the damaged genes associated with particular cancers and screen tumour tissue faster and earlier than before. This process involves nanoscale devices being distributed through the body to detect chemical changes. There is also an external scan using quantum dots on a patient’s DNA, which is then sequenced to check whether they carry a particular debilitating gene – a quicker and easier way for doctors to check in detail whether a patient has any illness or disease. Furthermore, nanotechnology will give doctors a more in-depth analysis and understanding of the body than x-rays and scans can provide.

While this is a great start for nanotechnology, there is still little known about how some of the technology might affect the body. Insoluble nanoparticles, for example, could build up in organs because they cannot diffuse into the bloodstream. And because nanoparticles are so small, there is no controlling where they go: they could enter cells and even their nuclei, which could be very dangerous for the patient. The House of Lords Science and Technology Committee has reported concerns about the effects of nanotechnology on human health, stating that sufficient research has not been conducted on “understanding the behaviour and toxicology of nanomaterials” and that this has not been given enough priority, especially given the speed at which nanotechnology is being produced.

Nanotechnology is advancing medical treatment at a rapid rate, with innovative new technologies approved each year to help combat illness and disease. While more research needs to be conducted, nanomedicine promises substantial benefits. Given the great burden that conditions like cancer, Alzheimer’s, HIV and cardiovascular disease impose on current healthcare systems, nanotechnology’s advanced techniques may well revolutionise healthcare in the years ahead.

@Biology_WHS 

Critical Thinking: “the important thing is not to stop questioning.” – Albert Einstein

Richard Gale, teacher of Biology at WHS, looks at the value of critical thinking and how we can use this to help make logical and well-structured arguments.

At some point we all accept a fact or an opinion without challenging it, especially if we deem the person telling us the fact or opinion to be in a position of authority.

Challenging or questioning these people can seem daunting and rude, or at worst we could appear ignorant or stupid. However, if we never challenged or questioned ideas or perceived facts then the world would still be considered to be flat, and we would not have the theories of relativity or evolution.

This is what Einstein is getting at, that all ideas and preconceived facts should be questioned otherwise society will stagnate and no longer advance in any field of study. This process of constantly asking questions and challenging ideas is known as critical thinking.

It is said that a critical thinker will identify, analyse, evaluate and solve problems systematically rather than by intuition or instinct – almost a list of the higher-order thinking skills from Bloom’s taxonomy. Critical thinking earns its place as a key higher-order skill because, as Paul and Elder (2007) noted, “much of our thinking, left to itself, is biased, distorted, partial, uninformed or down-right prejudiced. Yet the quality of our life and that of which we produce, make, or build depends precisely on the quality of our thought.”

In essence, critical thinking requires you to use your ability to reason. It is about being an active learner rather than a passive recipient of information, asking questions to understand the links between different topics. It requires learners to weigh up evidence and arguments and determine their importance and relevance, identifying which arguments are weak and which are stronger; to build and appraise their own arguments; and to identify inconsistencies and errors in arguments and reasoning – all in a systematic and consistent way. Then they should reflect on the justification of their own assumptions, beliefs and values. As Aristotle put it, “it is the mark of an educated mind to be able to entertain a thought without accepting it.”

Critical thinkers rigorously question ideas and assumptions rather than accepting them at face value. They will always seek to determine whether ideas, arguments and findings represent the entire picture, and they are open to finding that they do not. In principle, anyone stating a fact or an opinion – and I definitely include myself here as a teacher – should be able to reason why they hold it when questioned, and should be able to convince a class or an individual that the idea has merit. Equally, as I know my pupils would attest to, pupils should be able to reason why they hold their opinions or ideas when questioned. While this may seem daunting and at times a bit cruel, being able to think critically has become a very important skill with the onset of the new A levels.

In Biology, under the reformed linear A level, there has been an increase in the percentage of marks awarded for higher-order thinking skills, termed AO2 and AO3. AO2 is to ‘apply knowledge and understanding of scientific ideas, processes, techniques and procedures’, whereas AO3 is to ‘analyse, interpret and evaluate scientific information, ideas and evidence, including in relation to issues’. Across the three papers, AO2 is weighted at 40-45% of the overall marks and AO3 at 25-30%. Pupils taking the exams are expected to interpret data and theories critically, as well as analysing and applying what they have learnt in completely novel situations. A quote from Carl Sagan is now more relevant than ever, since knowing facts is no longer enough for pupils to succeed: “knowing a great deal is not the same as being smart; intelligence is not information alone but also judgment, the manner in which information is collected and used.”

Thankfully, we can develop and train ourselves – and others – to be critical thinkers. There is a plethora of guides and talks on how we can develop our skills as critical thinkers; choosing the most useful one is tricky and to an extent futile, as they all repeat the same basic principles with different language and animations. I have tried to summarise them as follows:

  1. Always ask questions of the fact or information provided and keep going until you are satisfied that the idea has been explained fully.
  2. Evaluate the evidence given to support the idea or fact; misconceptions are often based on poor data or interpretations. What is the motive of the source of the information – is there bias present? Do your research and find the arguments for and against: which is stronger, and why?
  3. Finally, do not assume you are right. Remember that we ourselves have biases, and we should challenge our own assumptions. What is my truth? What are the truths of others?

We can practise these skills when we are in any lesson or lecture, as well as when we are reading, to help develop a deeper understanding of a text. Evaluating an argument requires us to think if the argument is supported by enough strong evidence.

Critical thinking skills can be practised at home and in everyday life by asking people to provide a reason for a statement, either as they make it or by playing games – for example, having to swap three items you currently have for three things you want, then rationalising each choice. You can even engage in a bit of family Desert Island Discs, taking it in turns to practise your Socratic questioning (treat each answer with a follow-up question).

There are a few pitfalls to consider when engaging in critical thinking. The first is ignorant certainty: the belief that there are definite correct answers to all questions. Remember that all current ideas and facts are just our best interpretation of the best information or data we have to hand, and all of them are subject to re-evaluation and questioning. The second is naïve relativism: the belief that all arguments are equal. While we should consider all arguments, we cannot forget that some arguments are stronger than others and some are simply wrong. Even Isaac Newton, genius that he was, believed that alchemy was a legitimate pursuit.

Critical thinking is not easy; you have to be prepared to let go of your own beliefs and accept new information. Doing so is uncomfortable, because we define ourselves by our beliefs, but ultimately it is interesting and rewarding. As you explore your own beliefs and those of others through questioning, evaluating, researching and reviewing, know that this is enriching your ability to form arguments and sharpening your opinions and thoughts. You do not know what you will discover or where your adventure will take you, but it will take you nearer to the truth, whatever that might be. On your journey of lifelong learning, remember to “think for yourselves and let others enjoy the privilege to do so, too” (Voltaire).

Follow @STEAM_WHS and @Biology_WHS on Twitter.

Hotspotting: the conservation strategy to save our wildlife?


Alex (Year 11) investigates whether the strategy of hotspot conservation is beneficial to reducing mass extinction rates, or if this strategy is not all it claims to be.

Back in 2007, Professor Norman Myers was named the Time Magazine Hero of the Environment for his work in conservation with relation to biodiversity hotspots. He first came up with his concept of hotspot conservation in 1988, when he expressed his fears that ‘the number of species threatened with extinction far outstrips available conservation resources’. The main idea was that he would identify hotspots for biodiversity around the world, concentrating conservation efforts there and saving the most species possible in this way.

Myers’ fears are even more relevant now than 30 years ago. According to scientific estimates, dozens of species become extinct daily, making this the worst wave of extinction since the death of the dinosaurs 65 million years ago. And this one is not naturally occurring like a giant meteor colliding with the Earth: 99% of the species on the IUCN Red List of Threatened Species are at risk from human activities, such as ocean pollution and habitat loss through deforestation, amongst other things. It is therefore crucial that we act now and adopt a range of conservation strategies to give our ecosystems a chance of survival for future generations.

To be accepted as a hotspot, a region must meet two criteria: first, it must contain a minimum of 1,500 endemic plant species (species native to, or restricted to, that area); and second, it must have lost at least 70% of its original vegetation. Following these rules, 35 areas around the world – ranging from the Tropical Andes in South America to more than 7,100 islands in the Philippines, and all of New Zealand and Madagascar – were identified as hotspots. These areas cover only 2.3% of Earth’s total land surface but contain more than 50% of the world’s endemic plant species and 43% of its endemic terrestrial bird, mammal, reptile and amphibian species, making them crucial to the world’s biodiversity.
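
The two qualifying criteria are mechanical enough to express as a simple rule (a minimal Python sketch; the function name and example figures are mine, for illustration only):

```python
# Myers' two hotspot criteria, as described above.
def is_biodiversity_hotspot(endemic_plants: int, vegetation_lost_pct: float) -> bool:
    """True if a region has >= 1,500 endemic plant species
    and has lost >= 70% of its original vegetation."""
    return endemic_plants >= 1500 and vegetation_lost_pct >= 70.0

print(is_biodiversity_hotspot(3000, 75.0))  # True: qualifies as a hotspot
print(is_biodiversity_hotspot(3000, 40.0))  # False: vegetation criterion not met
```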

This concept has been hailed as a work of genius by conservationists and has consequently been adopted by many conservation agencies such as Conservation International – who believe that success in conserving these areas and their endemic species will have ‘an enormous impact in securing our global biodiversity’.

The principal barrier to all conservation efforts is funding: buying territories and caring for them costs a great deal of money, raised primarily from businesses, governments and individual donors. Most of this funding comes through campaigns focused on charismatic megafauna such as the penguin or the snow leopard; such campaigns motivate people because they feel a closer connection to these animals and can see the difference their donation makes to conserving the species. When conservation is done at a larger, regional level, donors feel less control over the work being done and so get less of the gratification that comes with giving. By identifying 35 specific areas towards which to channel funds, hotspot conservation reconnects the public, as well as larger companies and local government bodies, with the projects, thereby encouraging more donations. It is for this reason that hotspot conservation has received £740 million, the largest amount ever assigned to a single conservation strategy.

Although the 35 identified areas are relatively widespread and well funded, the strategy has been criticised for neglecting other crucial ecosystems. First, there are no hotspots in northern Europe or many other areas around the world, leaving many species of both flora and fauna overlooked. Because the classification criteria refer only to endemic plant species, much fauna is also neglected, from insects to large endangered species such as elephants, rhinos, bears and wolves. Furthermore, areas referred to as ‘coldspots’ are ignored altogether, which could lead to the collapse of entire ecosystems following the extinction of key species.

Another major issue with the strategy is that terrestrial environments make up only around 29.2% of the earth’s surface area. The other 70.8% is covered by very diverse (but also very threatened) oceans and seas. Marine environments are overlooked by hotspot conservation because they rarely host 1,500 endemic plant species: deep, dark oceans are far from the ideal environment for plant growth, and species floating at the surface are rarely confined to one specific area, so they are not endemic.

So, if even the more successful strategies for conservation are so flawed, is there any hope for the future? I think that yes, there is. Although there is no way to save all the species on earth, identifying crucially important areas to concentrate our efforts on is essential to modern conservation efforts. Hotspot conservation is definitely improving the ecological situation in these 35 areas and so those efforts should be continued, but that doesn’t mean that all conservation efforts should be focussed only on these hotspots. Hotspot conservation should be part of the overall strategy for reduction of mass extinction rates, but it is not the fix-all solution that some claim it is.

Follow @Geography_WHS & @EnviroRep_WHS on Twitter.

Crispr – How new gene editing technology will affect you.

Emma Ferraris in Year 12 gives us an insight into the new gene editing technology of the past 30 years and the potential it has for changing the world of medicine and how we view our species.

In the past 30 years, editing of the human genome has evolved and improved by leaps and bounds, most notably with the discovery of Crispr, a technology that has been available to scientists for manipulating the genome since 2012.

Crispr in itself is a short section of repeated DNA found in the genomes of bacteria and other microorganisms, but when coupled with an enzyme such as Cas-9, it enables geneticists to edit part of a genome by cutting the DNA at a specific place and removing or adding new sequences. A guide RNA acts as a marker to ensure the Cas-9 enzyme edits the sequence in the right place.
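
To make the ‘marker’ idea concrete, here is a toy sketch of how a guide sequence picks out a cut site (the sequences and the function are mine; real Cas-9 targeting requires a PAM motif – typically ‘NGG’ – directly after the matched site, which is the convention assumed here):

```python
import re

def find_cut_sites(genome: str, guide: str) -> list[int]:
    """Return start positions where the guide matches the genome
    and is immediately followed by an NGG PAM motif."""
    pattern = re.escape(guide) + r"[ACGT]GG"   # guide + any base + GG
    return [m.start() for m in re.finditer(pattern, genome)]

guide = "GACGGATCCGATTACAGGAT"                  # illustrative 20-nt guide
genome = ("AAAC" "GACGGATCCGATTACAGGAT" "TGG"   # guide + TGG PAM: cut site
          "CCCC" "GACGGATCCGATTACAGGAT" "ACT")  # guide but no PAM: ignored
print(find_cut_sites(genome, guide))            # -> [4]
```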


As a result, this technology is the most precise and versatile method of manipulating genes to date, and it can be purchased for around $60 – far cheaper than any other known method of DNA splicing. Unsurprisingly, there has been much buzz and scrutiny around the new technology in the scientific world.

So, what will this gene editing mean for the future of medicine? And how will this affect you?

There is a whole host of possible uses for this new technology, including its capacity for combatting diseases, viruses and mutations in humans, as well as its ability to edit the genome of specific cells in the body.

On a small scale, Crispr technology has been used successfully to edit the HIV virus out of the cells of rats. In 2016, Kamel Khalili, director of the Comprehensive NeuroAIDS Center at Temple University, managed to edit out around 50% of the HIV virus present in 99% of the rats’ cells. This success rate is very hopeful for the future removal of the virus from both animals and humans; indeed, Khalili himself commented that “CRISPR may be more convenient for gene editing than the prior gene editing tools used.” However, this is only one step, albeit an important one, towards using Crispr to edit the virus out of a human patient’s cells.

On a larger scale, Crispr has in the past year been used in immunotherapy to treat certain cancers. Michel Sadelain, of Memorial Sloan Kettering Cancer Center, was able to remove T-cells (a type of white blood cell that plays a part in the immune system) from the blood and edit them using Crispr so that they were better able to recognise the antigens (individualised markers) on cancerous cells. As a result, the T-cells could locate and destroy the tumour far more easily and effectively.

Similarly, in Pennsylvania this month, scientists began an experiment that not only makes T-cells better able to locate the mutating cells but also edits out two genes in the immune cells so that they can attack the tumour more effectively. This ex vivo therapy (performed outside the body) is also much safer than injecting Crispr straight into the blood, which can sometimes cause a negative immune reaction.

Possibly most like a science-fiction plot, Crispr also has the capacity in the future to edit human genes and indeed the DNA in reproductive cells so that a new breed of eugenics could be on the horizon.

James J. Lee, a researcher at the University of Minnesota, said: “In my opinion, CRISPR could in principle be used to boost the expected intelligence of an embryo by a considerable amount.” This theory is exciting but has, unsurprisingly, sparked many bioethical debates in recent years, especially after a 2015 study in China used Crispr to edit a human embryo.

Whether or not you agree with the ethics, the prospects of designer babies and the perfect genetically engineered human soldier, which were once merely fictional, suddenly seem like a possible reality if the editing of human embryos continues to improve.

Without a doubt there are many benefits of this technology which in the next decades will become increasingly used by the biomedical field in treatments for diseases and viruses in humans and animals. The potentially unethical and dehumanising effects of DNA editing are much more obscure, so it is the future generation’s responsibility to ensure the use of Crispr remains purely in the interest of scientific improvement.

Follow the WHS Biology department on Twitter: @Biology_WHS

Engineering – Take a closer look

Alex Farrer, one of our Scientists in Residence, looks at the value of science capital and the potential that this can have on future careers in the sciences.


2018 is the Year of Engineering – a government campaign to support the engineering profession in recruiting tomorrow’s engineers. Over the last 30 years efforts to attract girls and women into engineering have been unsuccessful. Currently less than 1 in 8 of the engineering workforce is female; boys are 3.5 times more likely to study A level Physics than girls; and boys are five times more likely to gain an engineering and technology degree (Engineering UK 2017).

Our STEAM focus at Wimbledon High provides insights into a variety of opportunities in engineering and in related areas such as design, sports, medicine and computer science. Through STEAM we strive to broaden what counts as science and help build the skills that future employers will value highly such as communication, problem solving and adaptability. We aim to encourage all pupils from Reception to Year 13 to think that STEAM is relevant and important to their lives, both now and in the future, and aim to build their science capital.

A national survey of young people aged between 11 and 15 found that 5% had a high level of science capital (ASPIRES projects).

Professor Louise Archer, from the UCL Institute of Education, directs the ASPIRES projects and has developed the concept of science capital, which refers to someone’s science-related qualifications, understanding, knowledge, interests, attitudes and contacts.

The Science Capital Teaching Approach aims to build on the existing science capital of pupils, encourage engagement with science and promote social justice.

If you have a high science capital you might:

  • watch scientific TV programmes
  • have science qualifications
  • enjoy reading popular science books
  • have friends and relatives that work in science and engineering professions
  • visit science museums and fairs
  • engage in science related hobbies or activities
  • talk about science and engineering news topics with people you know

The evidence from this research project shows that the more science capital a pupil has, the more likely they are to aspire to continue with sciences post-16 and to see science and engineering as fulfilling roles.

Below are some suggestions that schools could consider to build the science capital of pupils and adults in their communities so that everyone sees science and engineering as something of value.

  1. Host a family STEAM challenge event. This will help to encourage science talk with family members and show that STEAM is for everyone in the school community.
  2. Encourage science and engineering activities to “pop up” in the playground. Pupils, parents or staff could run the activities and the high visibility will encourage all members of the school community to get involved.
  3. Celebrate interest in scientific TV programmes and films. For example, show a screening of a film like Hidden Figures with scientists or historians on hand to answer questions, or encourage staff and pupils to talk about science they have seen on TV.
  4. Signpost STEAM books, magazines and events to staff and pupils. An example is Itch by Simon Mayo, which contains a great deal of chemistry, and there are also some excellent science magazines such as Whizz Pop Bang and BBC Focus that can be linked to lesson content.
  5. Think about ways to get families talking about STEAM homework that is set. Linking tasks to science or technology in the news will encourage talk, as will setting tasks where help from adults is actively encouraged, such as making a marble run, growing a mystery seed or taking a STEAM photograph.
  6. Find out the sorts of science interests, hobbies and expertise pupils and their families have, so that lessons and assemblies can be personalised. Setting a “Science and me” homework will help you discover how many parents and pupils in your class have scientific interests and skills.
  7. Elicit and value the wider links that pupils have to science and engineering and draw upon them in lessons. For example, using the experience of a gymnast in your class in a physics lesson will enable pupils to broaden what they think counts as science in their lives.
  8.  Invite scientists and engineers that pupils will relate to into lessons and encourage them to talk about the skills and attributes they use. This could be a parent who uses STEAM skills in their job, a STEM Ambassador or someone who has relevant interest and knowledge. Even better if the scientist or engineer visits a lesson other than science! @STEMAmbassadors


If you are a primary teacher and would like to find out more about how you can build science capital in your school we will be hosting a Science Capital Workshop on February 7th 1.30-3.30pm. Please contact joanna.sandys@wim.gdst.net if you would like to come along.

If any parents with STEAM expertise would enjoy sharing some of their knowledge, skills and insights with our pupils please do let antonia.jolly@wim.gdst.net know and we will be in touch.

We look forward to enriching the science capital of our community in this exciting Year of Engineering as our STEAM journey continues.

Follow @STEAM_WHS on Twitter – #YoE

Behind Closed Doors: The secret worlds within us…

By Rahi Patel, Year 12.

Have you ever wondered where the common phrase ‘gut feeling’ stems from, or how this meandering smooth tissue could be related to such complexities as emotions?

For centuries Ayurvedic medicine (an ancient Indian branch of medicine) has regarded the gut as the centre of our wellbeing; in modern medical practice, however, this once revered organ has been pushed aside to make way for the big players: the brain and the heart. Yet recent developments in the medical field are beginning to provide evidence for this ancient theory, showing us that our ‘gut feelings’ truly are significant.

To understand this rather counter-intuitive principle we must first establish the functions of our brain: it is the centre of conception and movement, the organ that led us to the discovery of electricity, and the organ that helps us coordinate the complexities of standing for Mrs Lunnon in a Monday morning assembly. Although we have built a strong link between the term ‘self’ and our brains, exploring this underrated organ, the gut, suggests there may be more to ‘ourselves’ than what lies behind the eyes.

Our guts possess a multitude of nerves found uniquely in this part of the body. This immediately poses a question: why would such an elaborate and complicated system be needed if the sole purpose of the gut were to create a pleasant journey for food from the mouth to the colon, accompanied by the occasional release of gaseous sounds?

So the scientists amongst us must all be wondering where the evidence is to support these new claims. Well, several experiments have been conducted around the world highlighting the importance of our gut with regard to mental well-being.

The ‘forced swimming test’ is a commonly used experiment for assessing the effectiveness of antidepressants. A mouse is placed in a basin of water too deep for it to stand in, so it is forced to swim; mice with depressive tendencies give up swiftly, while those with greater motivation persevere. Most antidepressants increase the time a mouse swims before giving up. One scientist, John Cryan, decided to feed half the mice with Lactobacillus rhamnosus, a bacterium widely known to be beneficial for gut health. Impressively, the mice with enhanced gut health not only swam for longer, but their blood also contained significantly lower levels of stress hormones.

Cooperation between the gut and the brain, via the vagus nerve, is thus proving to be a promising field for treating mental disorders and diseases. The gut is our largest sensory organ, so it makes sense for the brain to form a close relationship with it and build a detailed picture of the state of our bodies – after all, ‘knowledge is power’. This understanding is helping to shed light on complex neurological conditions such as depression: scientists are now aware that there is more to the ‘self’ than the brain, questioning the philosophical proposition ‘I think therefore I am’. Maybe we should adapt it to ‘I eat, then I think, therefore I am’.

Lining the labyrinth of organs known as the gut are approximately 100 trillion bacteria (weighing around 2kg), eagerly waiting to help us break down and assimilate the billions of particles that enter our bodies each day. They also help to produce new vitamins; for example, sauerkraut is significantly higher in vitamins than plain cabbage.

Not only do our bacteria increase the nutrient value of our food, they also advise us on the foods we should be eating – a perplexing idea, I know! But what we eat is a matter of life and death for our friendly cohabitants, so it makes sense for them to influence our choices. To trigger a craving, the brain must be accessed – a tough feat considering the armour-like meninges and the blood-brain barrier. Bacteria, however, can synthesise molecules small enough to reach the brain, such as the amino acids tyrosine and tryptophan, which are converted into dopamine and serotonin there. Of course, this is not the only way cravings materialise, but it is far easier to influence the brain with bacteria than with genes, which may help pave the way for future treatments for diseases such as hypertension and diabetes.

So next time you wonder why you’re craving a tortilla or your favourite brie, just eat it: 95% of serotonin (the happiness hormone) is produced in the gut, and we all now know the significance of ‘gut feelings’ for our well-being!