The Bends

Written by: Antonia Beevor

Humans have always explored. And as our technology has advanced, it has allowed us to reach places never before seen by people, including the depths of the ocean.

Over seventy percent of the Earth’s surface is covered in water, yet it remains a vast and largely unexplored landscape. The human body can adapt to being deep underwater (as proven by the deepest free dive of 214 metres) thanks to some remarkably well-adapted biology. The Eustachian tubes, which connect the throat and nose to the middle ear, allow divers to equalise the internal pressure of their ears with the pressure underwater.

But holding your breath can only take you so deep, and for centuries we have been trying to find ways to dive deeper and for longer. We can trace these attempts as far back as 332 BC, when Alexander the Great is said to have been lowered into the sea in a ‘diving bell’. Later, during the Renaissance, Leonardo da Vinci designed his own underwater breathing apparatus, made of tubes connected to an air source near the surface. But it was only in the late 19th century, in 1878, that Henry Fleuss created the first self-contained underwater breathing apparatus (SCUBA).

But for over 50 years, one of the great mysteries for divers was the bends. It earned the name because the joint pain it causes leads the afflicted to bend over; these days it is more commonly called decompression sickness. In addition to joint pain, decompression sickness can manifest as a skin rash, amnesia, vertigo and many other symptoms.

Divers usually breathe compressed air when diving – a mix of 21% oxygen and 79% nitrogen. On land, the nitrogen you breathe in is simply breathed back out. But underwater, the gas you breathe is under pressure, so instead of being exhaled, nitrogen begins to be forced into body tissue. There it stays dissolved, but when a diver ascends too quickly the pressure on the diver’s body drops rapidly, causing this nitrogen to come out of solution and form bubbles in the blood.
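To put rough numbers on the pressures involved, here is a short sketch (my own illustration, not part of the original article) using the common approximations that pressure at the surface is about 1 bar and that every 10 metres of seawater adds roughly another bar:

```python
# A minimal sketch of ambient pressure and nitrogen partial pressure at depth.
# Assumes ~1 bar at the surface and ~1 extra bar per 10 m of seawater; illustrative
# only, not a dive-planning tool.

SURFACE_PRESSURE_BAR = 1.0
BAR_PER_METRE_SEAWATER = 0.1
NITROGEN_FRACTION = 0.79  # compressed air is roughly 79% nitrogen

def ambient_pressure(depth_m: float) -> float:
    """Approximate total pressure (in bar) on a diver at a given depth."""
    return SURFACE_PRESSURE_BAR + depth_m * BAR_PER_METRE_SEAWATER

def nitrogen_partial_pressure(depth_m: float) -> float:
    """Approximate partial pressure of nitrogen (in bar) in the air breathed at depth."""
    return NITROGEN_FRACTION * ambient_pressure(depth_m)

for depth in (0, 10, 30, 50):
    print(f"{depth} m: total {ambient_pressure(depth):.1f} bar, "
          f"N2 {nitrogen_partial_pressure(depth):.2f} bar")
```

At 30 metres, for example, the total pressure is about 4 bar, so the nitrogen a diver breathes is at roughly four times its surface partial pressure, which is why so much more of it dissolves into the tissues.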

These bubbles can form anywhere in the body, as the varied symptoms of decompression sickness show. The most effective way to treat decompression sickness is recompression therapy: patients are placed in recompression chambers, where the pressure is increased, allowing the nitrogen bubbles to redissolve so that the gas can then be breathed out harmlessly.

Decompression sickness can be avoided by limiting the amount of nitrogen that dissolves during a dive. This can be done by limiting the depth and length of dives, ascending slowly, and stopping at points during ascent, which allows the excess nitrogen to escape.

Despite the risks associated with diving, the innovation and ingenuity that has taken us from diving bells to equipment that allows divers to spend hours underwater is a testament to humanity’s desire to discover.

Introduction of a new specification

Mrs Nicola Cooper, Teacher of Biology, looks at how the introduction of a new specification can provide an invaluable opportunity to reassess outcomes

I am a self-confessed Biology geek and my love for the subject knows no bounds. Its breadth, its relevance and the sheer beauty of the complexity that can arise from a few simple components is endlessly fascinating to me. Moreover, I love teaching Biology. Sharing my passion for the living world is energising and is a wonderful way of connecting in new ways with the key ideas and concepts that underpin the living world.

It has therefore been a source of frustration that over the years I have encountered many people whose experience of learning Biology at school is a negative one. A not-uncommon view seems to be that whilst many people have an innate interest in learning about the living world and our place within it, the study of Biology is characterised by mindless rote learning of a seemingly endless body of ‘facts’. If this perception is then reinforced by teaching that is built around imparting knowledge, then no wonder much of the joy, excitement and inspiration is lost.

This is something we are very aware of at Wimbledon High School and as a department we work hard to encourage our students away from rote learning towards a deep understanding of key concepts that they can then apply in a wide range of contexts.

So, what does this look like in practice? Well, this year we have been given the opportunity to think much more deliberately about this question, with the introduction of the new GCSE specification. In devising new schemes of work, we began by challenging ourselves to think expansively (during a wonderfully lively brainstorming session) with the question ‘What outcomes do we want for our Year 9 students?’ What emerged from that discussion was not a long list of ‘facts’ that we want our students to be able to recall but rather three key themes:

  • A sense of wonder about the living world
  • A questioning approach
  • An ability to solve their own problems.

These have been our guiding principles when planning lessons for Year 9 (the first cohort following the new course). We have deliberately chosen not to cover topics in a linear way but have (quite literally) cut up the specification and rearranged topics so that central concepts can be explicitly linked with related contexts. The aim, right from the start of the course, is to model how knowing and understanding a few key ideas can allow students to pose and then answer their own questions.


Drawing onion cells from a photo taken down a microscope

In our opening topic of health and disease we start each lesson with a question such as ‘What happens when you get ill?’ and ‘Is being healthy the same as not being ill?’. We have looked at medieval views on health and disease and linked our discussions to very recent experiences of the Covid-19 pandemic. We are also using the context of communicable diseases to explore the key concept of cell structure and function. Encouragingly, our students have responded very positively and there has been a definite buzz and fizz of excitement in my Year 9 lessons.

Zoe in year 9 said, “I found today’s lesson really helpful. I think we all gained an important biological skill that we will use throughout Biology”, while Penelope (year 9) said “I found it very interesting and rewarding, especially because we got to set up the experiment ourselves”.

From a teaching perspective it has been stimulating and refreshing to be reminded of our purpose, and as a department we are excited to see how the students continue to develop and flourish as they move through the rest of the year and on to their further studies in Biology.

Does time really fly when you’re having fun?

Taking a cue from Henri Bergson’s theory of time, Hafsa in Year 10 examines the science behind our sense that time speeds up when we are enjoying ourselves

Time is the most used noun in the English language, and yet we still struggle to define it, given its breadth and the many competing theories about its nature. We have all lived through the physical fractions of time, like the incessant ticking of the second hand or the gradual change of the seasons; but do we actually experience time in this form? This is a question that requires the tools of both philosophy and science in order to reach a conclusion.

In scientific terms, time can be defined as ‘the progression of events from the past to the present into the future’. In other words, it can be seen as made up of the seconds, minutes and hours that we observe in our day-to-day lives. Think of time as a one-directional arrow: it cannot be travelled across or reversed, but only and forever moves forward.

One philosophical theory would challenge such a definition of time. At the end of the 19th century, the renowned philosopher Henri Bergson published his doctoral thesis, ‘Time and Free Will: An Essay on the Immediate Data of Consciousness’, in which he explored his theory that humans experience time differently from this outwardly measurable sense. He suggested that as humans we divide time into separate spatial constructs such as seconds and minutes but do not really experience it in this form. If Bergson’s theory is right, our sense of time is really much more fluid than the scientific definition above suggests.

Image from www.pexels.com

If we work from the inside out, we can explore the different areas of our lives which influence our perception of time. The first area is the biological make-up of our bodies. We all have circadian rhythms which are physical, mental, and behavioural changes that follow a twenty-four-hour cycle. This rhythm is present in most living things and is most commonly responsible for determining when we sleep and when we are awake.

These internal body clocks vary from person to person, some running slightly longer than twenty-four hours and some slightly less. Consequently, everyone’s internal sense of time differs, from when people fall asleep or wake up to how people feel at different points during the day.

But knowing that humans have slight differences in their circadian rhythms doesn’t fully explain how our sense of time differs from the scientific definition. After all, these circadian rhythms still follow a twenty-four-hour cycle just like a clock. If we look at the wider picture, what is going on around us greatly affects our sense of time. In other words, our circadian rhythms are subject to external stimuli.

Imagine you are doing something you love, completely engrossed in the activity, whether it be an art, a science, or just a leisurely pastime. You look at the clock after what feels like two minutes and realise that twenty have actually passed. The activity acts as the external stimulus and greatly affects your perception of time.

When you are engrossed in an activity you enjoy, your mind is fully focussed on it, meaning there is no time for it to wander and look at the clock. Research suggests that the pleasurable event boosts dopamine release, which causes your circadian rhythm to run faster. Take an interval of five minutes as a basis: in this interval, because your internal body clock is running faster, you feel as though only two minutes have gone by; time feels as though it has been contracted.

By contrast, when you are bored, less dopamine is released, slowing your circadian rhythm, meaning your subjective sense of time runs slower. Using the same example, in an interval of five minutes you feel as though ten minutes have gone by, and time feels elongated. This biological process has the power to shape our perception of time and make it fluid.
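As a toy illustration of that scaling (my own sketch, not taken from the research cited below; the factors simply mirror the five-, two- and ten-minute figures above), the mapping from clock time to felt time could be written like this:

```python
# A minimal sketch of the subjective-time scaling described above.
# The factors are illustrative numbers, not measured values.

def felt_minutes(clock_minutes: float, factor: float) -> float:
    """Map minutes on the clock to subjectively felt minutes."""
    return clock_minutes * factor

ENGROSSED = 0.4  # five clock minutes feel like about two
BORED = 2.0      # five clock minutes feel like about ten

print(felt_minutes(5, ENGROSSED))  # -> 2.0
print(felt_minutes(5, BORED))      # -> 10.0
```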

So, the next time someone says ‘Wow, time really does fly by when you’re having fun,’ remember that there is much more science and philosophy behind the phrase than they might realise!

Sources

https://www.livescience.com/64901-time-fly-having-fun.html

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6042233/

Visit to the Francis Crick Institute

Grace S, Year 13 Student, writes about the recent Biology trip to visit the Francis Crick Institute.

WARNING: This article includes mentions, in a biomedical context, of some topics which some readers may find disturbing, including death, cancer and animal testing.

Last Friday some of the Biology A-level students were privileged enough to go on a trip to the Francis Crick Institute. All sorts of biomedical research goes on inside the Institute, but we went with a focus on its studies into cancer. During our day we visited the ‘Outwitting Cancer’ exhibition to find out more about the research projects and clinical trials that the Crick Institute is running; we had a backstage tour of the Swanton Laboratory to learn about the genetic codes of tumours and find out more about immunotherapy; and I attended a discussion on the impact pollution can have on the lungs.

We started our trip by visiting the public exhibition ‘Outwitting Cancer’, with Dr Swanton as our tour guide. We first walked through a film showing how tumours divide and spread, using representations from the natural and man-made world. The film also showed that tumours are made up of cancer cells and of T-cells (cells involved in the immune response) trying to regulate the growth of those cancer cells. We then moved through to an area where several clips were playing, outlining the different cancer projects underway at the Crick Institute. Many clinical trials and research projects aimed at understanding and fighting cancer were on display, but the one which fascinated me the most involved growing organoids (otherwise known as mini-organs) from stem cells. Stem cells are extracted from the patient and used to grow these organoids, which are then used to see how the patient’s tissue responds to different drugs. This would allow each treatment to be highly specific to the patient, and so perhaps lead to higher survival rates. In the same section of the exhibition there was a rainbow semicircle of ribbons, with stories written by visitors about their experiences with cancer clipped to them, ranging from those with lived experience to those who are simply curious to learn more. It was a fantastic exhibit and I recommend you give it a visit yourself – it’s free!

As interesting as this exhibit was, for us the highlight was a backstage tour of the Swanton Laboratory, followed by talks from members of the team working there. We learnt that they have found that there is heterogeneity within tumours, a fact that was not known just a few years ago: different sections of a tumour can have completely different genetic codes. This could significantly change the way in which tumours are analysed and treatments are prescribed. Previously, one tumour sample was thought to be representative of the whole tumour; it is now known that this is not the case, and that multiple samples from different sections of the tumour should be taken to get a comprehensive view of its structure and of how best it could be treated. Linked to this, one member of the team, a final-year PhD student, showed us graphics which they had been able to take, and colour, of the different cells present in a tumour. One of the main reasons cancer develops to the point where treatment is needed is that the body’s immune system has failed to neutralise the cancer cells, and the team were working to find out why this may be. In one of the graphics shown to us, a different type of immune cell had actually formed a wall around the T-cells, preventing them from reaching the cancer cells in order to eliminate them. This is important knowledge when considering immunotherapy treatments, which encourage the body’s own immune system to fight back against the cancer: in this case there would be little benefit in injecting or strengthening T-cells, as they would not be able to reach the cancer cells. Immunotherapy itself is still a relatively recent development, and it is still considered only after treatments such as chemotherapy have not been effective. By that stage the cancer is more advanced and much harder to treat with immunotherapy, so it is hoped that in the future immunotherapy will be considered before more generalised treatments such as chemotherapy.

Work is also being done to understand late-stage cancer. We were allowed into one of the stations where practical work is done (wearing red lab coats to indicate that we were visitors) and shown a series of slides showing where biopsies (a biopsy is the removal of a tissue sample) might be taken from a tumour. It was explained to us that TRACERx (the name of the project being undertaken in the Swanton Laboratory) had set up a programme through which people living with late-stage cancer can consent to their tumours being used for post-mortem research. Often these individuals had also signed up for earlier programmes, so information on their cancer at earlier stages was available and it was possible to see how the cancer had progressed. We were also shown several of the methods used to store samples, including the use of dry ice (solid carbon dioxide) and liquid nitrogen.

The final presentation I attended (we moved around in small groups on a carousel) discussed the influence of pollution on lung cancer. It had previously been found that the number of mutations we carry grows as we age, so mutations are clearly not the sole cause of cancer, as not everyone develops it. It has now been theorised that carcinogens, such as the particulate matter found in air pollution, activate these pre-existing mutations. Currently non-smokers comprise 14% of all lung cancer cases in the UK; as the number of smokers drops and people become more aware of the dangers of smoking, the proportion of people with lung cancer who are non-smokers will increase, making research into what may cause this lung cancer even more important. Lung cancer in non-smokers is currently the 8th largest cause of cancer death in the UK. Two ongoing experiments are studying the effect of exposure to pollution on mutations in the lungs: one is being run within the Institute, exposing mice to pollution, and another in Canada, where human volunteers are exposed for two hours to the average level of pollution in Beijing. Whilst it is unlikely that this exposure will lead to new mutations, it may cause changes in those already present. All of the research projects presented to us are ongoing, and it really was a privilege to see what sort of work is going on behind the scenes.

All of us were incredibly lucky to be able to go on this trip and meet some of the scientists working on such fascinating projects within the Francis Crick Institute. Most of us were biologically minded anyway, but if we had not been, this trip would certainly have swayed us.

Antimicrobials in agriculture: a short introduction

Leah in Year 12 explores why the use of antimicrobials in commercial livestock agriculture is such a contested issue.

The World Health Organization (WHO) has declared antimicrobial resistance (AMR) one of the top ten public health threats facing humanity[1]. While the use of antimicrobials has proved a huge boost to the livestock industry, in terms of faster growth in animals, the concern is that a higher prevalence of AMR in the food chain risks diminishing how effective antimicrobials will be in the human population.

Antimicrobials are agents used to kill unwanted microorganisms by targeting the ones that may pose a threat to humans and animals. These include antibiotics and antibacterial medication for bacterial infections, antifungal medication for fungal infections and antiparasitic medication for parasites.   

Resistance, or AMR, develops when antimicrobials are not used for their full course, so the most susceptible strains of bacteria are killed but the most resistant ones survive. These resistant strains then have the chance to reproduce and adapt to the environment. Over time the alleles that confer resistance become more common in the population, and the antimicrobials become ineffective.
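To see how quickly such selection can shift a population, here is a toy simulation (my own illustration with arbitrary numbers, not data from the sources below): each round of incomplete treatment kills most susceptible bacteria but few resistant ones, and the survivors regrow to full size.

```python
# A toy model of selection for resistance under repeated, incomplete treatment.
# The kill rates and starting fraction are arbitrary, purely for illustration.

def next_round(resistant_frac: float,
               kill_susceptible: float = 0.9,
               kill_resistant: float = 0.1) -> float:
    """Return the resistant fraction after one round of treatment and regrowth."""
    surviving_susceptible = (1.0 - resistant_frac) * (1.0 - kill_susceptible)
    surviving_resistant = resistant_frac * (1.0 - kill_resistant)
    survivors = surviving_susceptible + surviving_resistant
    # The population regrows to its original size, keeping the survivors' ratio.
    return surviving_resistant / survivors

frac = 0.01  # start with 1% of the population carrying a resistance allele
for round_number in range(1, 6):
    frac = next_round(frac)
    print(f"After round {round_number}: {frac:.1%} resistant")
```

Even starting from 1% resistant, the resistant fraction dominates within a handful of rounds – which is the logic behind the advice to complete a full course of treatment.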

As shown in the graph below,[2] antimicrobial use in farming actually decreased by 45% between 2015 and 2019, but has since picked up again despite the issue becoming more widespread.

Antimicrobials are often used in the production of both meat and animal products such as dairy, with two main objectives: promoting growth and preventing disease.

The prevention of disease in livestock is the less contested use, as it is well understood that livestock share living spaces, food and water. Generally, if one animal contracts a disease then the whole flock is at risk because of the proximity, and antimicrobials can help break this chain of transmission.

However, the WHO takes a strong stance on antimicrobials used as growth agents.[3] As stated in its guidelines on reducing antimicrobial resistance, the organization believes that ‘antimicrobials should not be used in the absence of disease in animals.’[4] Used as growth promoters, antimicrobials help convert the feed that farmers give their livestock into muscle, causing more rapid growth. The quicker an animal reaches slaughter weight, the quicker it can be sent to an abattoir and sold for profit. For example, a 44kg lamb – about the heaviest a lamb gets – produces 25kg of meat, so farmers want their lambs to reach 44kg in order to get the most money from each one.

Image via Pixabay

Over 400,000 people die each year from foodborne diseases. With the rise in antimicrobial resistance, these diseases will start to become untreatable. The most common pathogens transmitted through livestock and food to humans are non-typhoidal Salmonella (NTS), Campylobacter, and toxigenic Escherichia coli. Livestock can carry all of these pathogens, so they spread easily.

The WHO have been addressing AMR since 2001 and are advocating for it to become a more widely acknowledged issue. In some countries, the use of antimicrobials is already controlled: the US Food and Drug Administration (FDA) has been acting on this matter since 2014 because of the risk to human health.

Antimicrobial use in agriculture is a contested issue because, however many governing bodies recognise AMR as a problem, farmers rely on their livestock for their livelihoods and so are often driven by the need to secure an income. Evidence on antimicrobial resistance has always been hard to collect, which makes it harder to prove to these farmers that it is a growing issue.


References


[1] World Health Organization, WHO guidelines on use of medically important antimicrobials in food producing animals, 7 November 2017. https://www.who.int/foodsafety/areas_work/antimicrobial-resistance/cia_guidelines/en/ (accessed 24 April 2021)

[2] World Health Organization, Antimicrobial resistance in the food chain, November 2017. https://www.who.int/foodsafety/areas_work/antimicrobial-resistance/amrfoodchain/en/ (accessed 24 April 2021)

[3] World Health Organization, Ten threats to global health in 2019. https://www.who.int/news-room/spotlight/ten-threats-to-global-health-in-2019 (accessed 24 April 2021)

[4] Farm Antibiotics – Presenting the Facts, Progress across UK farming. https://www.farmantibiotics.org/progress-updates/progress-across-farming/ (accessed 24 April 2021)

Impact study: the spread of imported disease in Australia and New Zealand

Sophia (Year 13) looks at how European colonialism spread disease to Australia and New Zealand.

Although colonisers’ actions such as slavery, wars and other abysmal treatment of native populations caused many deaths, one of the biggest killers that came with colonisation was the introduction of new diseases to which native peoples had no immunity, owing to their previous isolation from the European invaders.

Image from Pexels

Between 1200 and 1500, Europe itself suffered several pandemics, as the growth of unsanitary cities created the perfect environment for infection and increasing contact with Asia – through the Mongol and Turkish invasions, for example – exposed Europe to major disease outbreaks. Between 1346 and 1351, the Black Death killed off about a third of Europe’s population. Relatively disease-hardened populations emerged from this in Europe: although local epidemics continued after 1500, none were really as severe as those of the years before; the worst epidemics after this date were instead in colonised nations. Here I will focus on the colonisation of Australia and New Zealand, with their different native peoples (the Aborigines and the Maori) and the different effects diseases had on them.

New Zealand

Imported diseases began to take a toll on the Maori from the 1790s. These included viral dysentery, influenza, whooping cough, measles, typhoid, venereal diseases and the various forms of tuberculosis. Missionaries and observers reported massive death rates and plummeting birth rates. However, unlike in the Americas and Australia, there is a strong chance that the deaths resulting from foreign disease have been widely exaggerated.

Rather, such exaggeration labelled the Maori as a dying race (a view which persisted to 1930), which helped to project the British Empire into New Zealand in 1840. One of the reasons for which the effect of disease was probably the smallest was simply the distance from Europe to New Zealand; it was a natural quarantine. The trip took 4 months or more, meaning that the sick either died or recovered; either way they were often no longer infectious on arrival. Therefore, the most pernicious European diseases – malaria, bubonic plague, smallpox, yellow fever, typhus and cholera – did not manage to transfer to New Zealand.

Image by Dan Whitfield via Pexels

Another factor which fostered the gross magnification of the demise of the Maori was the comparison of birth rates: missionary families were extremely large – the fourteen couples who went to New Zealand before 1833 had 114 children between them – so it was easy to amplify the decline in Maori birth rates into something far more serious than it was. Estimates of the Maori population at the time of European contact are also very unreliable, in most cases little better than guesses, which allows the effect of disease to be misjudged. For example, one estimate for 1769 based upon archaeological evidence gives a pre-contact birth rate of 37 per thousand per year and a death rate of 39,[1] which is scarcely plausible, since a death rate exceeding the birth rate would mean the Maori population was already shrinking before Europeans arrived. More moderate calculations suggest an average decline of 0.3% per year between 1769 and 1858.[2] Therefore, although the Maori population certainly suffered as a result of these diseases, there is a tendency to exaggerate this, portraying them as a ‘weaker’ people and a dying race, allowing for easier colonisation.

Australia

Although the Dutch were the first Europeans to reach Australia, it was a fleet of British ships which arrived at Botany Bay in January 1788 to establish a penal colony.[3] European disease spread across areas of Australia even before Europeans themselves had reached those parts: there was, for example, a smallpox epidemic near Sydney in 1789, wiping out around half of the Aborigines there.[4]

Photo by Damon Hall from Pexels

Some historians claim that this smallpox was acquired through contact with Indonesian fishermen in the far north and then spread south, while others argue that the outbreak was likely a deliberate act by British marines when they ran out of ammunition and needed to expand their settlement. Indeed, colonial thinking at the time unfortunately placed Europeans as the ‘superior race’; a book written by William Westgarth in 1864 on the colony of Victoria included the claim that ‘the case of the Aborigines of Victoria confirms…it would seem almost an immutable law of nature that such inferior dark races should disappear’. Therefore, as with New Zealand, describing the natives as a ‘dying race’ was an important tool for colonisation, which makes the purposeful introduction and spread of some diseases not too hard to believe.

Smallpox spread between Aboriginal communities, reappearing in 1829-30 and, according to one record, killing 40-60% of the Aboriginal population.[5] In addition, during the mid-to-late 19th century many Aborigines in southern Australia were forced to move to reserves; the nature of many of these institutions enabled disease to spread quickly, and many even began to close down as their populations fell.

Conclusion

Although one must be wary of statistics about native mortality rates in both countries, given the European tendency to exaggerate the decline in native populations, it is fair to say that the decline in Aboriginal populations was much greater than that of the Maori in New Zealand, although wars also contributed massively to this.

While roughly 16.5% of the New Zealand population is Maori, only 3.3% of Australians are Aboriginal, and it is safe to say that disease influenced this to some extent. So why was there such a difference between the effects of disease in these two countries, seemingly close together and both colonised by the British? A very large reason was smallpox: it was by far the biggest killer in Australia, but never reached New Zealand. The nature of the existing native communities was also important; there were 200-300 different Aboriginal nations in Australia, all with different languages, whereas the Maori were far more united, and were often seen as a more ‘advanced’ society, and therefore were never forcibly placed into reserves, which is where much of the spread of disease took place.

In addition, events in New Zealand occurred much later than in Australia, after slavery had been outlawed, meaning there was a slightly more humanitarian approach, and there is less evidence to suggest purposeful extermination of the Maori. This is not to discount any injustices suffered by the Maori; indeed, many did die from European disease, and in both cases the native populations were treated appallingly and were alienated from their land.

The influence of European disease was overwhelmingly more powerful in Australia. However, one must approach statistics about the effect of disease on native peoples with caution, as Europeans tended to exaggerate this area to portray such peoples as ‘dying races’, a device often used to support colonisation.


Bibliography

Ian Pool, Te Iwi Maori (New Zealand: Oxford University Press), 1991

James Belich, Making Peoples (New Zealand: Penguin Books), 1996

John H. Chambers, A Traveller’s History of New Zealand and the South Pacific Islands (Great Britain: Phoenix in association with the Windrush Press), 2003


[1] cited in Ian Pool, Te Iwi Maori (New Zealand: Oxford University Press), 1991, 35

[2] Ibid, 56

[3] Wikipedia cites Lewis, Balderstone and Bowan (2006), 25

[4] Judy Campbell, Invisible Invaders: Smallpox and other Diseases in Aboriginal Australia 1780-1880 (Australia: Melbourne University Press), 2002, 55

[5] Wikipedia cites Richard Broome, Arriving (1984), 27

How are organoids going to change biomedical research?


Kate in Year 13 explores how organoids are going to contribute to biomedical research. 

At the moment, biomedical research is almost exclusively carried out in animal models. Although this has led to a better understanding of many fundamental biological processes, it has left gaps in our understanding of human-specific development. In addition, the variability of human individuals is in sharp contrast to inbred animal models, leading to a deficiency in our knowledge about population diversity.

These limitations have pushed scientists to invent a new way of looking at and understanding how the human body works; their answer was organoids.

An Organoid (Wikipedia)

Organoids are miniaturised and simplified versions of organs, produced in vitro in 3D, which show realistic micro-anatomy. They originate from renewable tissue sources that self-organise in culture to acquire in vivo-like organ complexity, and there are potentially as many types of organoid as there are tissues and organs in the body. This opens up many opportunities: it allows scientists to study mechanisms of disease acting within human tissues, generates knowledge applicable to preclinical studies, and offers the possibility of studying human tissues with the same – if not a higher – level of scientific scrutiny, reproducibility and depth of analysis that has so far been possible only with non-human model organisms.

Organoids are also set to revolutionise drug discovery and accelerate the process of bringing much-needed drugs to reality. At present the process averages around 20 years from conception to reality. It is lengthy mainly because the pharmaceutical industry has relied on animal models and human cell lines that have little resemblance to normal or diseased tissue – possibly one of the reasons behind the high failure rate of clinical trials, which adds to the high cost of drug discovery: an average of $2 billion for each new drug that reaches the pharmacy.

Organoids can speed this development up because they use human cells instead of animal cells, improving compatibility and making the process quicker and more efficient. Organoids can also provide a better understanding of human development.

Above: Uses of organoids, from https://blog.crownbio.com/key-organoid-applications

The human brain, especially the neocortex (the part of the mammalian brain involved in higher-order functions such as sensory perception, cognition, spatial reasoning and language), has evolved to be disproportionately large compared with that of other species. A better understanding of this species-dependent difference through brain organoids will help us learn more about the mechanisms that make humans unique, and may aid the translation of findings made in animal models into therapeutic strategies – answering the question of what makes humans human.

Organoids are the future of biomedical research, providing the potential to study human development and model disease processes with the same scrutiny and depth of analysis customary for research with non-human model organisms. By resembling the complexity of the actual tissue or organ, patient-derived human organoid studies will accelerate medical research and generate knowledge about human development that will dramatically change the way we study biology in the future.

Invention through desperation – military medical advancements


Jessica, Year 13, explores military medical advancements in recent conflicts, discussing their impact and whether the nature of war acts as an inspiration for innovation.

In 2001, the conflict in Afghanistan began, continuing until a majority of British troops withdrew in the final months of 2014. During these years, 6,386 British personnel were injured, with 28 fatalities, leaving the survival rate at 99.6%.

This was unheard of in previous wars and a major success story for military medicine. However, the injuries and trauma suffered by soldiers during this period increasingly involved haemorrhaging and amputations caused by gunshot wounds and IEDs (improvised explosive devices – crude, unconventional homemade bombs). IEDs cause extensive blood loss, and haemorrhage has been blamed for 50% of combat deaths since World War Two. For these soldiers to survive, military medicine had to change in order to preserve life and limb. Three major advancements in military trauma medicine all arose from the need to find solutions to the new injuries that personnel and medics were now witnessing.

The first is haemostatic dressings. During the Afghanistan conflict, two new dressings were developed – XSTAT and QuikClot powder – containing components such as fibrinogen and thrombin that catalyse the natural coagulation response. XSTAT uses 92 medical sponges in a pocket-sized injector to pack an open wound and halt bleeding within fifteen seconds, increasing the chance of survival and holding pressure until the patient can reach a medical centre. The sponges also contain a marker that is visible on an X-ray, to ensure they are all removed later and so prevent infection.

Secondly, there was a development of the traditional tourniquet. A tourniquet is a constricting or compressing device used to control venous and arterial blood flow to a portion of an extremity for a period of time; it works by creating pressure equal to or higher than the patient’s systolic blood pressure. The single-hand-tie tourniquet is a development of the original tourniquet used by army medics, which had to be applied by the medic and so was carried only by them. Without the patient being able to apply their own tourniquet, crucial time and blood were lost while the medic reached the injured individual, reducing their chance of survival and increasing the complexity of their treatment and injuries. This led to the Combat Application Tourniquet (CAT), introduced into the US Army in 2005: the first single-hand-tie tourniquet, allowing soldiers to treat their own injuries immediately until the medic could attend and provide more advanced care. The tourniquet distributes pressure over a greater area, which is advantageous because it reduces the underlying tissue and nerve damage and prevents the limb from becoming ischaemic (having a deficient supply of blood), whilst remaining effective. This decrease in the time before a tourniquet is applied has reduced the mortality rate from haemorrhaging by 85%.

A third category of advancement is in the use of blood and the way it is transported. Blood and blood products, such as platelets, are crucial in the treatment of haemorrhaging and amputations. However, for blood to be viable for transfusion it must be kept in a cool, constant environment, far from the natural one in Afghanistan. This was previously a significant disadvantage and contributed to the low survival rates for haemorrhaging, but it improved with the development of the blood container. The Golden Hour mobile blood container stores up to four units of blood and platelets at the required temperatures of six and two degrees Celsius respectively, for 72 hours without electricity, batteries or ice, to aid emergency medics. Crucially, this enabled blood to be brought forward to the battlefield rather than stored at the field hospital.

The environment of the military and the nature of its role mean that trauma medicine needs to evolve to deal with the style of injuries it encounters: invention through desperation. However, it is important that care reflects not only the immediate treatment of the patient but also their long-term needs, to ensure they can achieve a high quality of life post-conflict.

What would happen if there was no stigma around mental illness?


Emily, Year 12, explores why there is a stigma around mental illnesses, how we can get rid of this stigma, and what effect the stigma has on society.

Mental illness is not just one disorder – many people know that – but what they may not appreciate is quite how long the list of disorders is. As young girls we are taught about anxiety, body dysmorphic disorder, depression, addiction, stress and self-harm, but the likelihood is that we know – from personal experience, through friends and family, or even through social media – that many more mental illnesses exist: for example, bipolar disorder, obsessive-compulsive disorder, schizophrenia, autism and ADHD. Chances are, we all know someone with a mental illness whether we realise it or not – the majority of the time these people function in the same way as people with no mental illness. So why is there such a stigma around mental illness, and how can we get rid of it?
When the AIDS epidemic started in the early 1980s, the disease mainly affected minority groups who already faced criticism; the disease only furthered this and made patients virtual pariahs, until advocacy groups and communities protested to expand awareness and pressured the U.S. government to fund research into the disease and its cure. In only seven years, scientists were able to identify the cause of AIDS as the human immunodeficiency virus (HIV), create the ELISA test to detect HIV in the blood, and establish azidothymidine (AZT) as the first antiretroviral drug to help those suffering from HIV/AIDS. This is a prime example of how public awareness can push science to expand the boundaries of its knowledge and find treatments. And along with eliminating symptoms, such treatments also erode the stigma, as more and more people learn about the disease. So why can’t this be the case for mental illness?

In a time when science wasn’t breaking new boundaries every day and knowledge wasn’t widely distributed, it is easy to see why those with such complicated illnesses were feared and surrounded by stigma. Now, however, when the greatest barrier is access to treatments rather than the science itself, and education about the subject is better than it has ever been, it is hard to see why there is still such shame in having these illnesses.

But what if there was no stigma? We would have early identification and intervention in the form of screening mechanisms in primary care settings – GP, paediatric, obstetric and gynaecological clinics and offices – as well as in schools and universities. The goal would be to screen those who are at risk of, or are already showing symptoms of, mental illness and to engage them in self-care and treatment before the illness severely affects their brains and lives. We would also have community-based comprehensive care for those in more advanced stages of illness, supporting people who are unable to care for themselves and who may otherwise end up homeless, in jail or in mental hospitals.
For example: victims of trauma would be treated for PTSD, along with any physical injuries, while in hospital, targeting PTSD before symptoms started occurring and the patient could hurt themselves or others; first responders would routinely receive preventative and decompression treatments for PTSD rather than waiting to see who may or may not show symptoms; and mothers would be treated for pre- and post-partum depression as part of pre- and post-natal check-ups, instead of waiting until they potentially harm themselves or their baby. Children with learning disabilities would be identified early on so they could get cognitive training and emotional support, preventing counterproductive frustration over something they cannot control.

Medical economists have shown that this method of proactive mental healthcare would actually reduce the cost of delivering it. It would also relieve emotional stress for patients and their families, ease the financial burden of treatment, and reduce the occurrence of many very prevalent social problems. We all know about the mass shootings that occur regularly, and a great many of these crimes have been perpetrated by young men with an untreated mental illness that had presented symptoms long before the crime was committed – not that I am excusing their behaviour in any way.

As a worldwide community, we must be able to recognise mental illness for what it is – a medical condition that can be treated, whether with behavioural or cognitive therapy or with medication. In order to dissolve the stigma, we must be involved, ask questions, be kind, be compassionate, and make it our own business. There is only so much science can do if people are not willing to accept the help they are offered – they need to want to get better. The only way this will happen is if we all help to make it known that having a mental illness is not a bad thing, that it is treatable, and that those who live with it are no different from anyone else.

The Brain Chemistry of Eating Disorders

Jo, Year 13, explores what is happening chemically inside the brains of those suffering from eating disorders and shows how important this science is to understanding these mental health conditions.

An eating disorder is any of a range of psychological disorders characterised by abnormal or disturbed eating habits. Anorexia is defined as a lack or loss of appetite for food and an emotional disorder characterised by an obsessive desire to lose weight by refusing to eat. Bulimia is an emotional disorder characterised by a distorted body image and an obsessive desire to lose weight, in which bouts of extreme overeating are followed by fasting, self-induced vomiting or purging. Anorexia and bulimia are often chronic and relapsing disorders, and anorexia has the highest death rate of any psychiatric disorder. Individuals with anorexia and bulimia are consistently characterised by perfectionism, obsessive-compulsiveness and dysphoric mood.

Dopamine and serotonin function are integral to both of these conditions; how does brain chemistry enable us to understand what causes anorexia and bulimia?

Dopamine

Dopamine is a compound present in the body as a neurotransmitter; it is primarily responsible for pleasure and reward and in turn influences our motivation and attention. It has been implicated in the symptom pattern of individuals with anorexia, specifically in the mechanisms of reinforcement and reward involved in anorexic behaviours such as restricting food intake. Dysfunction of the dopamine system contributes to characteristic traits and behaviours of individuals with anorexia, including compulsive exercise and the pursuit of weight loss.

In people suffering from anorexia, dopamine release is stimulated by restricting food to the point of starvation. People feel ‘rewarded’ by severely reducing their calorie intake, and in the early stages of anorexia the more dopamine that is released, the more rewarded they feel and the more the restricting behaviour is reinforced. In bulimia, dopamine serves as the ‘reward’ and ‘feel-good’ chemical released in the brain when overeating. Dopamine ‘rushes’ affect people with anorexia and with bulimia, but for people with anorexia it is starving that releases dopamine, whereas for people with bulimia it is binge eating.

Serotonin

Serotonin is responsible for feelings of happiness and calm – too much serotonin can produce anxiety, while too little may result in feelings of sadness and depression. Evidence suggests that altered brain serotonin function contributes to the dysregulation of appetite, mood and impulse control seen in anorexia and bulimia. High levels of serotonin may result in heightened satiety, which means it is easier to feel full. Starvation and extreme weight loss decrease levels of serotonin in the brain, giving temporary relief from negative feelings and emotional disturbance, which reinforces anorexic symptoms.

Tryptophan is an essential amino acid found in the diet and is the precursor of serotonin, which means that it is the molecule required to make serotonin. Theoretically, binging behaviour is consistent with reduced serotonin function, while anorexia is consistent with increased serotonin activity; so decreased tryptophan levels in the brain, and therefore decreased serotonin, increase bulimic urges.

Conclusions

Distorted body image is another key concept to understand when discussing eating disorders. The area of the brain known as the insula is important for appetite regulation and also for interoceptive awareness, which is the ability to perceive signals from the body like touch, pain and hunger. Chemical dysfunction in the insula, a structure in the brain that integrates the mind and body, may lead to distorted body image, which is a key feature of anorexia. Some research suggests that some of the problems people with anorexia have regarding body image distortion can be related to alterations of interoceptive awareness. This could explain why a person recovering from anorexia can draw a self-portrait of their body that is typically three times its actual size. Prolonged untreated symptoms appear to reinforce the chemical and structural abnormalities seen in the brains of those diagnosed with anorexia and bulimia.

Therefore, in order not only to understand but also to treat anorexia and bulimia, it is essential to look at the brain chemistry behind these disorders and use it to work out how best to go about treating them successfully.