Visit to the Francis Crick Institute

Grace S, Year 13 Student, writes about the recent Biology trip to visit the Francis Crick Institute.

WARNING: This article includes mentions, in a biomedical sense, of some topics which some readers may find disturbing, including death, cancer and animal testing.

Last Friday some of the Biology A-level students were privileged enough to go on a trip to the Francis Crick Institute. All sorts of biomedical research goes on inside the Institute, but we went with a particular focus on its studies into cancer. During our day, we visited the ‘Outwitting Cancer’ exhibition to find out more about the research projects and clinical trials that the Crick Institute is running; we had a backstage tour of the Swanton Laboratory to learn about the genetic codes of tumours and find out more about immunotherapy; and I attended a discussion on the impact pollution can have on the lungs.

We started our trip by visiting the public exhibition ‘Outwitting Cancer’, with Dr Swanton as our tour guide. We first walked through a film showing how tumours divide and spread, using representations from the natural and man-made world. The film also showed that tumours are made up of cancer cells and of T-cells (cells involved in the immune response) trying to regulate the growth of those cancer cells. We then moved through to an area where several clips were playing, outlining the different cancer projects underway at the Crick Institute. Many clinical trials and research projects aimed at understanding and fighting cancer were on display, but the one which fascinated me the most involved growing organoids (otherwise known as mini-organs) from stem cells. The stem cells would be extracted from the patient and used to grow these organoids, which would then be used to see how they respond to different drugs. This would allow each treatment to be highly specific to the patient, and so perhaps lead to higher survival rates. In the same section of the exhibition there was a rainbow semi-circle of ribbons, with stories of visitors’ experiences with cancer clipped to them, written by people ranging from those with lived experience to those simply curious to learn more. It was a fantastic exhibit and I recommend you give it a visit yourself: it’s free!

As interesting as this exhibit was, for us the highlight was a backstage tour of the Swanton Laboratory, followed by talks from members of the team working there. We learnt that they have found that there is heterogeneity within tumours, a fact that was not known just a few years ago: different sections of a tumour can have completely different genetic codes. This could significantly change the way in which tumours are analysed and treatments are prescribed. Previously, one tumour sample was thought to be representative of the whole tumour; it is now known that this is not the case, and multiple samples from different sections of the tumour should be taken to get a comprehensive view of its structure and how best it could be treated. Linked to this, one member of the team, a final-year PhD student, showed us images they had taken and colour-coded to show the different cells present in a tumour. One of the main reasons cancer develops to the point where treatment is needed is that the body’s immune system has failed to neutralise the cancer cells, and the team were working to find out why this may be. In one of the images shown to us, a different type of immune cell had actually formed a wall around the T-cells, preventing them from reaching the cancer cells in order to eliminate them. This would be important knowledge when considering immunotherapy treatments, which encourage the body’s own immune system to fight back against the cancer: in this case there would be little benefit in injecting or strengthening T-cells, as they would not be able to reach the cancer cells. Immunotherapy itself is still a relatively recent development, and it is currently considered only after treatments such as chemotherapy have not been effective. By this stage the cancer is more advanced and much harder to treat with immunotherapy, so it is hoped that in the future immunotherapy will be considered before more generalised treatments such as chemotherapy.

Work is also being done to understand late-stage cancer. We were allowed into one of the stations where practical work is done (wearing red lab coats to indicate that we were visitors) and shown a series of slides illustrating where biopsies (a biopsy is the removal of a tissue sample) might be taken from a tumour. It was explained to us that TRACERx (the project being undertaken in the Swanton Laboratory) had set up a programme through which people living with late-stage cancer can consent to their tumours being used for post-mortem research. Often these individuals had also signed up for earlier programmes, so information on their cancer at earlier stages was available and it was possible to see how the cancer had progressed. Several of the methods used to store samples were also explained to us, including the use of dry ice (solid carbon dioxide) and liquid nitrogen.

The final presentation I attended (we were on a carousel in small groups) discussed the influence of pollution on lung cancer. It had previously been found that the number of mutations we carry grows as we age, yet not everyone develops cancer, so mutations are clearly not the sole cause. It has now been theorised that carcinogens, such as the particulate matter found in air pollution, activate these pre-existing mutations. Currently, non-smokers comprise 14% of all lung cancer cases in the UK. As the number of smokers drops, with people becoming more aware of the dangers of smoking, the proportion of people with lung cancer who are non-smokers will increase, making research into what may cause this lung cancer even more important. Lung cancer in non-smokers is currently the eighth most common cause of cancer death in the UK. Two ongoing experiments are studying the effect of exposure to pollution on mutations in the lungs: one is being run within the Institute, exposing mice to pollution, and another in Canada, where human volunteers are exposed to average Beijing pollution levels for two hours. Whilst it is unlikely that this exposure will lead to new mutations, it may cause changes in those already present. All of the research projects presented to us are ongoing, and it really was a privilege to see what sort of work is going on behind the scenes.
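
A quick aside on the arithmetic behind the point about proportions: the small calculation below is purely illustrative (the 14% share is the figure quoted above, but the assumed halving of smoking-related cases is an invented number, not a prediction). It shows how the non-smoker share of lung cancer cases rises even if the number of non-smoker cases stays exactly the same.

```python
# Illustrative arithmetic only: how the non-smoker share of lung cancer cases
# changes if smoking-related cases fall. The 14% starting share comes from the
# article; the 50% drop in smoking-related cases is an invented assumption.

total_cases = 100                      # arbitrary baseline number of cases
non_smoker_cases = 14                  # 14% of cases occur in non-smokers
smoker_cases = total_cases - non_smoker_cases

# Suppose smoking-related cases halve while non-smoker cases stay the same.
future_smoker_cases = smoker_cases * 0.5
future_share = non_smoker_cases / (non_smoker_cases + future_smoker_cases)

print(f"non-smoker share now:   {non_smoker_cases / total_cases:.0%}")   # 14%
print(f"non-smoker share later: {future_share:.0%}")                     # ~25%
```

With those made-up numbers, the non-smoker share climbs from 14% to roughly a quarter of all cases, even though nothing has changed for non-smokers themselves.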

All of us were incredibly lucky to be able to go on this trip and meet some of the scientists working on such fascinating projects within the Francis Crick Institute. Most of us were biologically minded anyway, but if we had not been, this trip would certainly have swayed us.

Antimicrobials in agriculture: a short introduction

Leah in Year 12 explores why the use of antimicrobials in commercial livestock agriculture is such a contested issue.

The World Health Organization (WHO) has declared antimicrobial resistance (AMR) one of the top ten public health threats facing humanity[1]. While the use of antimicrobials has proved a huge boost to the livestock industry, in terms of faster growth in animals, the concern is that higher levels of AMR in the food chain risk diminishing how effective antimicrobials will be in the human population.

Antimicrobials are agents used to kill unwanted microorganisms, targeting those that may pose a threat to humans and animals. They include antibiotics and antibacterial medication for bacterial infections, antifungal medication for fungal infections and antiparasitic medication for parasites.

Resistance, or AMR, develops when antimicrobials are not used for their full course: the weakest strains of bacteria are killed but the strongest ones survive, giving those strains the chance to adapt to the environment. Over time the alleles that confer resistance become more common in the population and the antimicrobials become ineffective.
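
To picture how this selection pressure plays out, here is a minimal toy simulation (a sketch only: the kill rates, regrowth factor and starting resistant fraction are invented numbers for illustration, not data from any real study). Each incomplete ‘course’ of treatment kills most of the susceptible bacteria but fewer of the resistant ones, and the survivors regrow before the next course, so the resistant fraction climbs.

```python
# Toy illustration of selection for antimicrobial resistance.
# All numbers are invented for illustration; this is not a model of real data.

def simulate(courses=10, kill_susceptible=0.9, kill_resistant=0.3,
             regrowth=20.0, resistant_fraction=0.01):
    """Track the resistant fraction over repeated, incomplete treatment courses."""
    susceptible = 1.0 - resistant_fraction
    resistant = resistant_fraction
    history = [resistant / (resistant + susceptible)]
    for _ in range(courses):
        # An incomplete course kills most susceptible bacteria but fewer resistant ones.
        susceptible *= (1 - kill_susceptible)
        resistant *= (1 - kill_resistant)
        # The survivors regrow before the next course (both groups grow equally).
        susceptible *= regrowth
        resistant *= regrowth
        history.append(resistant / (resistant + susceptible))
    return history

if __name__ == "__main__":
    for course, fraction in enumerate(simulate()):
        print(f"after course {course}: resistant fraction = {fraction:.1%}")
```

With these made-up numbers, the resistant fraction climbs from 1% to over 90% within four or five incomplete courses, which is the ratchet effect described above.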

According to WHO data[2], antimicrobial use in farming actually decreased by 45% between 2015 and 2019, but it has since picked up again despite the issue becoming more widespread.

Antimicrobials are often used in both meat production and animal-product farming, such as dairy, with two main objectives: promoting growth and preventing disease.

The prevention of disease in livestock is the less contested use, as it is well understood that livestock share living spaces, food and water. Generally, if one animal contracts a disease then the whole flock is at risk because of this proximity. Antimicrobials can help break this chain of transmission.

However, the WHO has a strong viewpoint on antimicrobials as growth promoters.[3] As stated in its guidelines on reducing antimicrobial resistance, the organization believes that ‘antimicrobials should not be used in the absence of disease in animals.’[4] Growth promotion works by helping livestock convert the feed farmers give them into muscle, causing more rapid growth. The quicker an animal reaches slaughter weight, the quicker it can be sent to the abattoir and sold for profit. For example, a 44kg lamb (roughly the heaviest a lamb will get) produces about 25kg of meat, so farmers want their lambs to reach 44kg to get the most money from each one.


Over 400,000 people die each year from foodborne diseases. With the rise in antimicrobial resistance, these diseases will start to become untreatable. The most common pathogens transmitted through livestock and food to humans are non-typhoidal Salmonella (NTS), Campylobacter and toxigenic Escherichia coli. Livestock can carry all of these pathogens, so they can spread easily.

The WHO has been addressing AMR since 2001 and is advocating for it to become a more widely acknowledged issue. In some countries, the use of antimicrobials is already controlled. The US Food and Drug Administration (FDA) has been acting on this matter since 2014 because of the risk to human health.

Antimicrobial resistance is a contested issue because, as much as AMR is a problem with a variety of governing bodies behind it, farmers rely on their livestock for their livelihoods and so are often driven by the need to secure an income. Evidence on antimicrobial resistance has also always been hard to collect, which makes it harder to prove to farmers that it is a growing issue.


References


[1] World Health Organization, WHO guidelines on use of medically important antimicrobials in food-producing animals, 7 November 2017, https://www.who.int/foodsafety/areas_work/antimicrobial-resistance/cia_guidelines/en/ (accessed 24 April 2021)

[2] World Health Organization, Antimicrobial resistance in the food chain, November 2017, https://www.who.int/foodsafety/areas_work/antimicrobial-resistance/amrfoodchain/en/ (accessed 24 April 2021)

[3] World Health Organization, Ten threats to global health in 2019, https://www.who.int/news-room/spotlight/ten-threats-to-global-health-in-2019 (accessed 24 April 2021)

[4] Farm Antibiotics – Presenting the Facts, Progress across UK farming, https://www.farmantibiotics.org/progress-updates/progress-across-farming/ (accessed 24 April 2021)

Impact study: the spread of imported disease in Australia and New Zealand

Sophia (Year 13) looks at how European colonialism spread disease to Australia and New Zealand.

Although the tragedies brought by the actions of colonisers, such as slavery, wars and other abysmal treatment of native populations, caused many deaths, one of the biggest killers was the introduction of new diseases to which native peoples had no immunity, owing to their previous isolation from the European invaders.


Between 1200 and 1500, Europe itself suffered several pandemics, due to the growth of unsanitary cities, which created the perfect environment for infection, and to increasing contact with Asia, for example through the Mongol and Turkish invasions, which exposed Europe to major disease outbreaks. For example, between 1346 and 1351 the Black Death killed about a third of Europe’s population. Relatively disease-hardened populations emerged from this: although local epidemics still occurred in Europe after 1500, none were really as bad as those of the years before; instead, the worst epidemics after this date were in colonised nations. Here I will focus on the colonisation of Australia and New Zealand, two countries with different native peoples (the Aborigines and the Maori) and with different experiences of disease.

New Zealand

Imported diseases began to affect many Maori from the 1790s. These included viral dysentery, influenza, whooping cough, measles, typhoid, venereal diseases and the various forms of tuberculosis. Missionaries and observers reported massive death rates and plummeting birth rates. However, unlike in the Americas and Australia, there is a good chance that the deaths attributed to foreign disease were widely exaggerated.

Rather, such exaggeration labelled the Maori a dying race (a view which persisted to 1930), which helped to project the British Empire into New Zealand in 1840. One of the reasons the effect of disease was probably smaller here was simply the distance from Europe to New Zealand; it acted as a natural quarantine. The voyage took four months or more, meaning that the sick either died or recovered; either way, they were often no longer infectious on arrival. As a result, the most pernicious European diseases – malaria, bubonic plague, smallpox, yellow fever, typhus and cholera – did not manage to transfer to New Zealand.


Another factor which fostered the gross magnification of the demise of the Maori was the contrast in birth rates: missionary families were extremely large – the fourteen couples who went to New Zealand before 1833 had 114 children – so it was easy to amplify the decline in Maori birth rates into something far more serious than it was. Estimates of the Maori population at the time of European contact are also very unreliable and, in most cases, wild guesses, which allows the effect of disease to be misjudged. For example, one estimate for 1769 based upon archaeological science gives a pre-contact birth rate of 37 per thousand per year and a death rate of 39[1], obviously impossible given that it would leave the Maori population in the minus-thousands. More moderate calculations suggest an average decline of 0.3% per year between 1769 and 1858[2]. Therefore, although the Maori population did suffer as a result of these diseases, there is a tendency to exaggerate this in order to portray them as a ‘weaker’ people and a dying race, allowing for easier colonisation.

Australia

Although Australia was initially discovered by the Dutch, it was a fleet of British ships which arrived at Botany Bay in January 1788 to establish a penal colony[3].  European disease spread to areas of Australia, even before Europeans had reached those parts. For example, there was a smallpox epidemic near Sydney in 1789, wiping out around half of the Aborigines there.[4] 


Some historians claim that the disease was acquired through contact with Indonesian fishermen in the far north and then spread; others argue that the outbreak was likely a deliberate act by British marines who had run out of ammunition and needed to expand their settlement. Indeed, colonial thinking at the time unfortunately placed Europeans as the ‘superior race’; a book written by William Westgarth in 1864 on the colony of Victoria included the line: ‘the case of the Aborigines of Victoria confirms…it would seem almost an immutable law of nature that such inferior dark races should disappear’. Therefore, as with New Zealand, describing the natives as a ‘dying race’ was an important tool for colonisation, meaning the purposeful introduction and spread of some diseases is not too hard to believe.

Smallpox spread between Aboriginal communities, reappearing in 1829-30 and, according to one record, killing 40-60% of the Aboriginal population.[5] In addition, during the mid-to-late 19th century, many Aborigines in southern Australia were forced to move to reserves; the nature of many of these institutions enabled disease to spread quickly, and many even began to close down as their populations fell.

Conclusion

Although one must be wary of statistics about native mortality rates in both countries, given the European tendency to exaggerate the decline in native populations, it is fair to say that the decline in Aboriginal populations was much greater than that of the Maori in New Zealand, although wars also contributed massively to this.

While roughly 16.5% of the New Zealand population is Maori, only 3.3% of Australians are Aboriginal, and it is safe to say that disease influenced this to some extent. So why was there such a difference between the effects of disease in these two countries, seemingly close together and both colonised by the British? A very large reason was smallpox: it was by far the biggest killer in Australia, but never reached New Zealand. The nature of the existing native communities was also important. There were 200-300 different Aboriginal nations in Australia, all with different languages, whereas the Maori were far more united, and were often seen as a more ‘advanced’ society, and therefore were never forcibly placed into reserves, which is where a lot of the spread of disease took place.

In addition, events in New Zealand occurred much later than in Australia, after slavery had been outlawed, meaning there was a slightly more humanitarian approach, and there is less evidence to suggest purposeful extermination of the Maori. This is not to discount any injustices suffered by the Maori; indeed, many did die from European disease, and in both cases the native populations were treated appallingly and were alienated from their land.

The influence of European disease was overwhelmingly more powerful in Australia. However, one must approach statistics about the effect of disease on native peoples with caution, as Europeans tended to exaggerate this area to portray such peoples as ‘dying races’, a device often used to support colonisation.


Bibliography

Ian Pool, Te Iwi Maori (New Zealand: Oxford University Press), 1991

James Belich, Making Peoples (New Zealand: Penguin Books), 1996

John H. Chambers, A Traveller’s History of New Zealand and the South Pacific Islands (Great Britain: Phoenix in association with the Windrush Press), 2003


[1] cited in Ian Pool, Te Iwi Maori (New Zealand: Oxford University Press), 1991, 35

[2] Ibid, 56

[3] Wikipedia cites Lewis, Balderstone and Bowan (2006), 25

[4] Judy Campbell, Invisible Invaders: Smallpox and other Diseases in Aboriginal Australia 1780-1880 (Australia: Melbourne University Press), 2002, 55

[5] Wikipedia cites Richard Broome, Arriving (1984), 27

How are organoids going to change biomedical research?


Kate in Year 13 explores how organoids are going to contribute to biomedical research. 

At the moment, biomedical research is carried out almost exclusively in animal models. Although this has led to a better understanding of many fundamental biological processes, it has left gaps in our understanding of human-specific development. In addition, the variability of human individuals is in sharp contrast to inbred animal models, leaving a deficiency in our knowledge of population diversity.

These limitations have forced scientists to invent a new way of looking at and understanding how the human body works; their answer was organoids.

An Organoid (Wikipedia)

Organoids are miniaturised and simplified versions of organs, produced in vitro in 3D, which show realistic micro-anatomy. They originate from renewable tissue sources that self-organise in culture to acquire in vivo-like organ complexity. There are potentially as many types of organoid as there are tissues and organs in the body. This provides many opportunities: it allows scientists to study mechanisms of disease acting within human tissues, generates knowledge applicable to preclinical studies, and offers the possibility of studying human tissues at the same, if not a higher, level of scientific scrutiny, reproducibility and depth of analysis than has previously been possible only with non-human model organisms.

Organoids are going to revolutionise drug discovery and accelerate the process of bringing much-needed drugs to reality. At present the process averages around 20 years from conception to reality. It is lengthy mainly because the pharmaceutical industry has relied on animal models and human cell lines that bear little resemblance to normal or diseased tissue – possibly one of the reasons behind the high failure rate of clinical trials, which adds to the high cost of drug discovery: an average of $2 billion for each new drug that reaches the pharmacy.

Organoids can help this development because they use human cells instead of animal cells; the improved compatibility makes the process quicker and more efficient. Organoids also provide a better understanding of human development.

Above: Uses of organoids, from https://blog.crownbio.com/key-organoid-applications

The human brain, especially the neocortex (the part of the mammalian brain involved in higher-order functions such as sensory perception, cognition, spatial reasoning and language), has evolved to be disproportionately large compared with that of other species. A better understanding of this species-dependent difference through brain organoids will help us learn more about the mechanisms that make humans unique, and may aid the translation of findings made in animal models into therapeutic strategies, answering the question of what makes humans human.

Organoids are the future of biomedical research, providing the potential to study human development and model disease processes with the same scrutiny and depth of analysis customary for research with non-human model organisms. By resembling the complexity of the actual tissue or organ, patient-derived human organoid studies will accelerate medical research and generate knowledge about human development, dramatically changing the way we study biology in the future.

Invention through desperation – military medical advancements


Jessica, Year 13, explores military medical advancements in recent conflicts, discussing their impact and whether the nature of war acts as an inspiration for innovation.

In 2001, the conflict in Afghanistan began, continuing until a majority of British troops withdrew in the final months of 2014. During these years, 6,386 British personnel were injured, with 28 fatalities, leaving the survival rate at 99.6%.

This was unheard of in previous wars and a major success story for military medicine. However, the injuries and trauma suffered by soldiers during this period increasingly involved haemorrhaging and amputations caused by gunshot wounds and IEDs (improvised explosive devices, a type of unconventional, crude, homemade bomb). These IEDs cause extensive blood loss, which has been blamed for 50% of combat deaths since World War Two. For these soldiers to survive, military medicine had to change in order to preserve life and limb. Three major advancements in military trauma medicine all arose from the need to find solutions to the new injuries that personnel and medics were now witnessing.

The first is haemostatic dressings. During the Afghanistan conflict, two new dressings were developed, XSTAT and QuikClot powder, which contain components such as fibrinogen and thrombin that catalyse the natural coagulation response. XSTAT uses 92 medical sponges in a pocket-sized injector to pack an open wound and halt bleeding within fifteen seconds. XSTAT increases the chance of survival and holds pressure until the patient can reach a medical centre. The sponges also contain a marker which is visible on an X-ray, to ensure all of them are removed later and so prevent infection.

Secondly, there was a development of the traditional tourniquet. A tourniquet is a constricting or compressing device used to control venous and arterial blood flow to a portion of an extremity for a period of time; it works by creating pressure equal to or higher than the patient’s systolic blood pressure. The single-hand-tie tourniquet is a development of the original tourniquet used by army medics, which had to be applied by the medic and so was carried only by them. Without the patient being able to apply their own tourniquet, crucial time and blood were lost while the medic reached the injured individual, reducing their chance of survival and increasing the complexity of their treatment and injuries. This is why the Combat Application Tourniquet (CAT) was developed and introduced into the US Army in 2005. It was the first single-hand-tie tourniquet, allowing soldiers to treat their own injuries immediately until the medic could attend and provide more advanced care. The tourniquet distributes pressure over a greater area, which is advantageous because it reduces the underlying tissue and nerve damage, preventing the limb from becoming ischaemic (having a deficient supply of blood), whilst remaining effective. This decrease in the time before a tourniquet is applied has cut the mortality rate due to haemorrhaging by 85%.

A third category of advancement is in the use of blood and the way it is transported. Blood and blood products, such as platelets, are crucial in the treatment of haemorrhaging and amputations. However, for blood to remain viable for transfusion it must be kept in a cool, constant environment, far from the natural one in Afghanistan. This was previously a significant disadvantage and contributed to the low survival rates for haemorrhaging, but it improved with the development of the blood container. The Golden Hour mobile blood container stores up to four units of blood and platelets at the required temperature (between two and six degrees Celsius) for 72 hours without electricity, batteries or ice, to aid emergency medics. Crucially, this enabled blood to be brought forward to the battlefield rather than stored at the field hospital.

The environment of the military and the nature of its role mean that trauma medicine needs to evolve to deal with the style of injuries it is experiencing: invention through desperation. However, it is important that care not only reflects the immediate treatment of the patient but also considers their long-term needs, to ensure they can achieve a high quality of life post-conflict.

What would happen if there was no stigma around mental illness?


Emily, Year 12, explores why there is a stigma around mental illnesses, how we can get rid of this stigma, and what effect the stigma has on society.

Mental illness is not just one disorder – and many people know that – but what they don’t understand is quite how expansive the list of disorders is. As young girls, we are taught about anxiety, body dysmorphic disorder, depression, addiction, stress and self-harm, but the likelihood is that we know – from personal experience, through friends, family or even social media – that many more mental illnesses exist: for example, bipolar disorder, obsessive-compulsive disorder, schizophrenia, autism and ADHD. Chances are we all know someone with a mental illness, whether we realise it or not; most of the time these people function in the same way that people with no mental illness do. So why is there such a stigma around mental illness, and how can we get rid of it?

When the AIDS epidemic started in the early 1980s, the disease mainly affected minority groups who already faced discrimination. The disease only furthered this and made patients virtual pariahs, until advocacy groups and communities protested to expand awareness and pressured the U.S. government to fund research into the disease and its cure. In only seven years, scientists were able to identify that the cause of AIDS was the human immunodeficiency virus (HIV), create the ELISA test to detect HIV in the blood, and establish azidothymidine (AZT) as the first antiretroviral drug to help those suffering from HIV/AIDS. This is a prime example of how public knowledge can lead to science pushing the boundaries of its knowledge and finding treatments. Along with eliminating symptoms, treatments also help to eliminate the stigma, as more and more people learn about the disease. So why can’t this be the case for mental illness?

In a time when science wasn’t breaking new boundaries every day, and knowledge wasn’t being distributed properly, it is easy to see why those with such complicated illnesses were feared and had such a stigma surrounding them. However, now when the greatest barrier is access to treatments and not the science, and the education about the subject is as high as it has ever been, it is hard to see why there is still such shame in having these illnesses.

But what if there was no stigma? We would have early identification and intervention in the form of screening mechanisms in primary care settings such as GP, paediatric, obstetric and gynaecological clinics and offices, as well as schools and universities. The goal would be to screen those who are at risk of, or are having symptoms of, mental illness and engage patients in self-care and treatment before the illness severely affects their brains and lives. We would also have community-based comprehensive care for those in more advanced stages of illness. This would support people who are unable to care for themselves and who might otherwise end up homeless, in jail or in mental hospitals.

For example: victims of trauma would be treated for PTSD along with any physical injuries while in the hospital to target PTSD before any symptoms started occurring and the patient could hurt themselves or others; first responders would have preventative and decompression treatments routinely administered to treat PTSD before waiting to see who may or may not show symptoms; mothers would be treated for pre/post-partum depression as a part of pre/post-natal check-ups instead of waiting and potentially harming themselves or their baby. Children with learning disabilities would be identified early on so they could get cognitive training, and emotional support to prevent counterproductive frustration due to something they cannot control.

Medical economists have shown that this method of proactive mental healthcare would actually reduce the cost of delivering it. It would also relieve emotional stress for the patient and their family, ease the financial burden of treatment, and reduce the occurrence of many prevalent social problems. We all know about the mass shootings that occur regularly, and a great number of these crimes have been perpetrated by young men with untreated mental illnesses whose symptoms had been present for a long time before the crime was committed – not that I am excusing their behaviour in any way.

As a worldwide community, we must be able to recognise mental illness for what it is – a medical condition that can be treated, whether with behavioural or cognitive therapy or with medication. In order to dissolve the stigma, we must be involved, ask questions, be kind, be compassionate and make it our own business. There is only so much science can do if people are not willing to take the help they are being offered – they need to want to get better. The only way this will happen is if we all help to make it known that having a mental illness is not a bad thing, that it is treatable, and that those affected are no different from anyone else.

The Brain Chemistry of Eating Disorders

Jo, Year 13, explores what is happening chemically inside the brains of those suffering from eating disorders and shows how important this science is to understanding these mental health conditions.

An eating disorder is any of a range of psychological disorders characterised by abnormal or disturbed eating habits. Anorexia is defined as a lack or loss of appetite for food, and as an emotional disorder characterised by an obsessive desire to lose weight by refusing to eat. Bulimia is defined as an emotional disorder characterised by a distorted body image and an obsessive desire to lose weight, in which bouts of extreme overeating are followed by fasting, self-induced vomiting or purging. Anorexia and bulimia are often chronic, relapsing disorders, and anorexia has the highest death rate of any psychiatric disorder. Individuals with anorexia and bulimia are consistently characterised by perfectionism, obsessive-compulsiveness and dysphoric mood.

Dopamine and serotonin function are integral to both of these conditions; how does brain chemistry enable us to understand what causes anorexia and bulimia?

Dopamine

Dopamine is a compound present in the body as a neurotransmitter; it is primarily responsible for pleasure and reward, and in turn influences our motivation and attention. It has been implicated in the symptom pattern of individuals with anorexia, specifically in relation to the mechanisms of reinforcement and reward involved in anorexic behaviours such as restricting food intake. Dysfunction of the dopamine system contributes to characteristic traits and behaviours of individuals with anorexia, including compulsive exercise and the pursuit of weight loss.

In people suffering from anorexia, dopamine levels are stimulated by restricting to the point of starving. People feel ‘rewarded’ by severely reducing their calorie intake, and in the early stages of anorexia the more dopamine that is released, the more rewarded they feel and the more reinforced the restricting behaviour becomes. In bulimia, dopamine serves as the ‘reward’ and ‘feel-good’ chemical released in the brain during overeating. Dopamine ‘rushes’ affect people with both anorexia and bulimia, but for people with anorexia it is starving that releases dopamine, whereas for people with bulimia it is binge eating.

Serotonin

Serotonin is responsible for feelings of happiness and calm – too much serotonin can produce anxiety, while too little may result in feelings of sadness and depression. Evidence suggests that altered brain serotonin function contributes to the dysregulation of appetite, mood and impulse control in anorexia and bulimia. High levels of serotonin may result in heightened satiety, meaning it is easier to feel full. Starvation and extreme weight loss decrease levels of serotonin in the brain, which brings temporary relief from negative feelings and emotional disturbance, and this in turn reinforces anorexic symptoms.

Tryptophan is an essential amino acid found in the diet and is the precursor of serotonin, meaning it is the molecule required to make serotonin. Theoretically, binging behaviour is consistent with reduced serotonin function, while anorexia is consistent with increased serotonin activity. So decreased tryptophan levels in the brain, and therefore decreased serotonin, increase bulimic urges.

Conclusions

Distorted body image is another key concept to understand when discussing eating disorders. The area of the brain known as the insula is important for appetite regulation and also for interoceptive awareness, the ability to perceive signals from the body such as touch, pain and hunger. Chemical dysfunction in the insula, a structure in the brain that integrates the mind and body, may lead to distorted body image, which is a key feature of anorexia. Some research suggests that some of the problems people with anorexia have with body image distortion can be related to alterations in interoceptive awareness. This could explain why a person recovering from anorexia can draw a self-portrait of their body that is typically three times its actual size. Prolonged untreated symptoms appear to reinforce the chemical and structural abnormalities seen in the brains of those diagnosed with anorexia and bulimia.

Therefore, in order not only to understand but also to treat anorexia and bulimia, it is essential to look at the brain chemistry behind these disorders and use it to work out how to treat them successfully.

 

How fungi help trees to communicate

Freya, Year 13, explores how trees are able to communicate and help each other using a network of fungi in the soil.

Underneath your feet there could be as much as 300 miles of fungi packed into the soil. This special network of fungi, called the mycorrhizal network, brings together fungi and trees in a symbiotic relationship which helps trees to communicate, and has been coined the ‘wood wide web’. You may have unknowingly seen mycorrhizae before: they are long and white and look a bit like silly string.

When a tree seed germinates, its roots grow towards the fungi in the soil. In return for nutrients and water from the fungi, trees send sugars down to them. This is of great value to the fungi, as they cannot photosynthesise and so cannot make their own sugars, which are needed for growth. Not only does the network connect the fungi and trees, it also connects the different trees in a given area: all the trees whose roots grow into the mycorrhizae are linked. This allows the trees to communicate.

Using the mycorrhizal network, a tree that has been taken over by a certain pest can send danger signals to other trees. When other trees pick up this signal, they release their own chemicals above ground to attract the predators of the pest towards them, thereby reducing the population of pests.

Amazingly, when a tree ‘knows’ it’s dying it will do everything it can to aid the survival of the trees around it. Researchers noted that as an injured tree was dying, it sent all of its carbon down through its roots into the mycorrhizal network so that it could be absorbed by neighbouring trees. In doing so, these neighbouring trees were strengthened.

The driving researcher behind this work, Suzanne Simard, found that trees will help each other out when one of them is in shade. She used carbon-14 tracers to monitor the movement of carbon from one tree to another, and found that trees growing in more light would send more carbon to trees in the shade, helping to make up for the food the shaded trees could not produce for themselves. At times when one tree had lost its leaves and so couldn’t photosynthesise as much, more carbon was sent to it from evergreen trees.

This discovery could be used in the future to reduce the disastrous effects of deforestation. If loggers keep the network of fungi intact, with many of the oldest trees still present, new trees planted will be able to utilise and reuse carbon more efficiently thanks to the wood wide web.

Nanotechnology and its future in medicine – 07/09/18

Maya (Year 11) discusses the uses of nanotechnology in medicine, thinking about how far it has come and how it has helped doctors. She also considers the dangerous aspects of using such small technology and the future benefits it may bring.

Technology in medicine has come far, and with it the introduction of nanotechnology. Nanotechnology is the manipulation of structures and properties at the atomic and molecular level; the scale involved is around one-billionth of a metre. The technology has many uses, such as electronics, energy production and medicine, and is valuable for its diverse applications. Nanotechnology is useful in medicine because of its size and the way it interacts with biological molecules of a similar or larger scale. It is a valuable new tool that is being used for research and for combatting various diseases.

In medicine, nanotechnology is already being used in a wide variety of areas, the principal one being cancer treatment. In 2006 a report issued by NanoBiotech Pharma stated that developments related to nanotechnology would mostly be focused on cancer treatments. Drugs such as Doxil, used to treat ovarian cancer, use nanotechnology to evade the immune system, enabling the drug to be delivered to the disease-specific areas of the body. Nanotechnology is also helping in neuroscience, where European researchers are currently using the technology to carry electrical activity across dead brain tissue left behind by strokes and illnesses. The initial research was carried out to get a more in-depth analysis of the brain and to create more biocompatible grids (a piece of technology that surgeons place in the brain to find where a seizure has taken place). The result is more sophisticated than previous technologies and, when implanted, does not cause as much damage to existing brain tissue.

Beyond helping to combat cancer and supporting research, nanotechnology is used in many areas of medicine, from appetite control to medical tools, bone replacement and even hormone therapy. Nanotechnology is advancing all areas of medicine, with nano-sized particles enhancing new bone growth, and there are even wound dressings containing nanoparticles that provide powerful microbial resistance. It is with these new developments that we are revolutionising the field of medicine, and with more advancements we will be able to treat diseases as soon as they are detected.

Scientists are hoping that in the future nanotechnology can be used to move away from chemotherapy altogether: fighting cancer by using gold and silica nanoparticles that bind to the mutated cells in the body, then using infra-red lasers to heat up the gold particles and kill the tumour cells. This application would be beneficial because it would reduce the risk of damaging surrounding cells, as the laser would not affect them as much as chemotherapy would.

In other areas, nanotechnology is developing further in diagnostics and medical data collection. Using this technology, doctors would be able to look for the damaged genes associated with particular cancers and screen tumour tissue faster and earlier than before. The process involves nano-scale devices being distributed through the body to detect chemical changes. There is also an external scan using quantum dots on a patient’s DNA, which is then sequenced to check whether they carry a particular debilitating gene, providing a quicker and easier method for doctors to check in detail whether a patient has contracted any illnesses or diseases. Furthermore, doctors will be able to gain a more in-depth analysis and understanding of the body through nanotechnology, surpassing the information found from X-rays and scans.

While this is a great start for nanotechnology, there is still little known about how some of the technology might affect the body. Insoluble nanoparticles, for example, carry a high risk of building up in organs, as they cannot diffuse into the bloodstream. And because nanoparticles are so small, there is no controlling where they go, which might lead to particles entering cells and even their nuclei, which could be very dangerous for the patient. The House of Lords Science and Technology Committee has reported concerns about the effects of nanotechnology on human health, stating that sufficient research has not been conducted on “understanding the behaviour and toxicology of nanomaterials” and that it has not been given enough priority, especially given the speed at which nanotechnology is being produced.

Nanotechnology is advancing medical treatment at a rapid rate, with new, innovative technologies approved each year to help combat illnesses and diseases. Whilst more research needs to be conducted, the application of nanomedicine promises benefits with real potential to be valuable. Overall, given the great burden that conditions like cancer, Alzheimer’s, HIV and cardiovascular disease impose on current healthcare systems, nanotechnology and its advanced techniques could revolutionise healthcare as it progresses.

@Biology_WHS 

Critical Thinking: “the important thing is not to stop questioning.” – Albert Einstein

Richard Gale, teacher of Biology at WHS, looks at the value of critical thinking and how we can use this to help make logical and well-structured arguments.

At some point we all accept a fact or an opinion without challenging it, especially if we deem the person telling us the fact or opinion to be in a position of authority.

Challenging or questioning these people can seem daunting and rude, or at worst we could appear ignorant or stupid. However, if we never challenged or questioned ideas or perceived facts then the world would still be considered to be flat, and we would not have the theories of relativity or evolution.

This is what Einstein is getting at, that all ideas and preconceived facts should be questioned otherwise society will stagnate and no longer advance in any field of study. This process of constantly asking questions and challenging ideas is known as critical thinking.

It is said that someone who is a critical thinker will identify, analyse, evaluate and solve problems systematically rather than by intuition or instinct; almost a list of higher-order thinking skills from Bloom’s taxonomy. The reason for placing critical thinking as a key higher-order skill is that, as Paul and Elder (2007) noted, “much of our thinking, left to itself, is biased, distorted, partial, uninformed or down-right prejudiced. Yet the quality of our life and that of which we produce, make, or build depends precisely on the quality of our thought.”

In essence, critical thinking requires you to use your ability to reason. It is about being an active learner rather than a passive recipient of information, asking questions to understand the links that exist between different topics. It requires learners to weigh up and determine the importance and relevance of evidence and arguments, identifying arguments that are weak and those that are stronger; to build and appraise their own arguments; and to identify inconsistencies and errors in arguments and reasoning, doing all of this in a systematic and consistent way. They should then reflect on the justification for their own assumptions, beliefs and values. As Aristotle put it, “it is the mark of an educated mind to be able to entertain a thought without accepting it.”

Critical thinkers rigorously question ideas and assumptions rather than accepting them at face value. They will always seek to determine whether the ideas, arguments and findings represent the entire picture and are open to finding that they do not. In principle anyone stating a fact or an opinion, and I am definitely including myself here as a teacher, should be able to reason why they hold that fact or opinion when asked questions and should be able to convince a class or an individual that those ideas have merit. Equally, as I know my pupils would attest to, pupils should be able to reason why they hold their opinions or ideas when questioned. Whilst this may seem daunting and at times a bit cruel, being able to think critically has become a very important skill with the onset of the new A levels.

In Biology, under the reformed linear A level, there has been an increase in the percentage of marks awarded for higher-order thinking skills, termed AO2 and AO3. AO2 is to ‘apply knowledge and understanding of scientific ideas, processes, techniques and procedures’, whereas AO3 is to ‘analyse, interpret and evaluate scientific information, ideas and evidence, including in relation to issues.’ Across the three papers, AO2 is weighted at 40-45% of the overall marks and AO3 at 25-30%. The pupils taking the exams are expected to critically interpret data and theories, as well as analysing and interpreting the information they have learnt in completely novel situations. The following quote from Carl Sagan is now more significant, as knowing facts is no longer enough for pupils to succeed: “knowing a great deal is not the same as being smart; intelligence is not information alone but also judgment, the manner in which information is collected and used.”

Thankfully, we can develop and train ourselves – and others – to be critical thinkers. There are a plethora of guides and talks on how we can develop our skills as critical thinkers, and choosing which one is most useful is tricky and to an extent futile, as they all repeat the same basic principles with different language and animations. I have tried to summarise these as follows:

  1. Always ask questions of the fact or information provided and keep going until you are satisfied that the idea has been explained fully.
  2. Evaluate the evidence given to support the idea or fact; often misconceptions are based on poor data or interpretations. What is the motive of the source of the information? Is there bias present? Do your research and find the arguments for and against: which is stronger, and why?
  3. Finally, do not assume you are right; remember that we ourselves have biases and we should challenge our own assumptions. What is my truth? What are the truths of others?

We can practise these skills when we are in any lesson or lecture, as well as when we are reading, to help develop a deeper understanding of a text. Evaluating an argument requires us to think if the argument is supported by enough strong evidence.

Critical thinking skills can be practised at home and in everyday life by asking people to provide a reason for a statement. This can be done as they make it, or by playing games, such as having to swap three items you currently have for three things you want and then rationalising each choice. You can even engage in a bit of family Desert Island Discs, taking it in turns to practise your Socratic questioning (treat each answer with a follow-up question).

There are a few pitfalls to consider when engaging with critical thinking; the first of these is ignorant certainty. This is the belief that there are definite correct answers to all questions. Remember that all current ideas and facts are just our best interpretation of the best information or data we currently have to hand, and all of them are subject to re-evaluation and questioning. The next pitfall is more relevant to critical thinking and is naïve relativism – the belief that all arguments are equal. While we should consider all arguments, we cannot forget that some arguments are stronger than others and some are indeed wrong. Even Isaac Newton, genius that he was, believed that alchemy was a legitimate pursuit.

Critical thinking is not easy; you have to be prepared to let go of your own beliefs and accept new information. Doing so is uncomfortable, as we base ourselves on our beliefs, but ultimately it is interesting and rewarding. As you explore your own beliefs and those of others through questioning, evaluating, researching and reviewing, know that this is enriching your ability to form arguments and enhancing your opinions and thoughts. You do not know what you will discover or where your adventure will take you, but it will take you nearer to the truth, whatever that might be. Whilst on your journey of lifelong learning, remember to “think for yourselves and let others enjoy the privilege to do so, too” (Voltaire).

Follow @STEAM_WHS and @Biology_WHS on Twitter.