Antimicrobials in agriculture: a short introduction

Leah in Year 12 explores why the use of antimicrobials in commercial livestock farming is such a contested issue.

The World Health Organization (WHO) has declared antimicrobial resistance (AMR) to be one of the top 10 public health threats facing humanity[1]. While the use of antimicrobials has proved a huge boost to the livestock industry in terms of faster growth in animals, the concern is that greater antimicrobial resistance in the food chain risks making these drugs less effective in the human population.

Antimicrobials are agents used to kill or inhibit microorganisms that pose a threat to humans and animals. They include antibiotics and antibacterial medication for bacterial infections, antifungal medication for fungal infections, and antiparasitic medication for parasites.

Resistance, or AMR, develops when antimicrobials are not used for their full course, so that the most susceptible strains of bacteria are killed while the most resistant ones survive and reproduce. Over time the alleles that confer resistance become more common in the population and the antimicrobials become ineffective.

As shown in WHO data[2], antimicrobial use in farming actually decreased by 45% between 2015 and 2019, but it has since picked up again, even as resistance has become more widespread.

Antimicrobials are often used in the farming of both meat and other animal products, such as dairy, with two main objectives: promoting growth and preventing disease.

The prevention of disease in livestock is the less contested use: livestock share living spaces, food and water, so if one animal contracts a disease the whole flock or herd is at risk because of this proximity. Antimicrobials can help break this chain of transmission.

However, the WHO takes a strong view on antimicrobials used as growth agents.[3] As stated in its guidelines on reducing antimicrobial resistance, the organization believes that ‘antimicrobials should not be used in the absence of disease in animals.’[4] Growth promotion works by helping convert the feed that farmers give their livestock into muscle, producing more rapid growth. The quicker an animal reaches slaughter weight, the quicker it can be sent to an abattoir and sold at a profit. For example, a 44kg lamb, roughly the heaviest a lamb gets, produces 25kg of meat, so farmers want their lambs to reach 44kg in order to get the most money from each animal.
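
To make the arithmetic in that example concrete, here is a minimal sketch of how slaughter weight feeds into revenue, using the article's figures of a 44kg lamb producing 25kg of meat; the price per kilogram is an invented, purely illustrative figure, not a number from the article.

```python
# Illustrative arithmetic only: how slaughter weight relates to meat yield and
# revenue, using the article's example of a 44 kg lamb producing 25 kg of meat.
# The price per kilogram is a hypothetical figure chosen purely for illustration.

PRICE_PER_KG = 5.50          # assumed price of lamb meat in GBP/kg (illustrative)
YIELD_FRACTION = 25 / 44     # meat yield implied by the article's example (~0.57)

def revenue(live_weight_kg: float) -> float:
    """Estimated revenue for a lamb of a given live weight, assuming a constant
    meat-yield fraction and a fixed price per kilogram of meat."""
    return live_weight_kg * YIELD_FRACTION * PRICE_PER_KG

for weight in (36, 40, 44):
    print(f"{weight} kg lamb -> approx. {revenue(weight):.2f} GBP")
```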

Image via Pixabay

Over 400,000 people die from foodborne diseases each year. With the rise in antimicrobial resistance, some of these diseases may become untreatable. The most common pathogens transmitted through livestock and food to humans are non-typhoidal Salmonella (NTS), Campylobacter and toxigenic Escherichia coli, all of which are carried by livestock and can therefore spread easily.

The WHO has been addressing AMR since 2001 and advocates for it to become a more widely acknowledged issue. In some countries, the use of antimicrobials is already controlled. The US Food and Drug Administration (FDA) has been acting on the matter since 2014 because of the risk to human health.

Antimicrobial resistance is a contested issue because, however many governing bodies recognise AMR as a problem, farmers rely on their livestock for their livelihoods and are therefore often driven by the need to secure an income. Evidence on antimicrobial resistance has also always been hard to collect, which makes it harder to persuade farmers that it is a growing problem.


References


[1] World Health Organization, WHO guidelines on use of medically important antimicrobials in food producing animals, 7 November 2017, https://www.who.int/foodsafety/areas_work/antimicrobial-resistance/cia_guidelines/en/ (accessed 24 April 2021)

[2] World Health Organization, Antimicrobial Resistance in the food chain, November 2017, https://www.who.int/foodsafety/areas_work/antimicrobial-resistance/amrfoodchain/en/ (accessed 24 April 2021)

[3] World Health Organization, Ten threats to global health in 2019, https://www.who.int/news-room/spotlight/ten-threats-to-global-health-in-2019 (accessed 24 April 2021)

[4] Farm Antibiotics – Presenting the Facts, Progress across UK farming, https://www.farmantibiotics.org/progress-updates/progress-across-farming/ (accessed 24 April 2021)

Impact study: the spread of imported disease in Australia and New Zealand

Sophia (Year 13) looks at how European colonialism spread disease to Australia and New Zealand.

Although slavery, wars and other abysmal treatment of native populations by colonisers caused many deaths, one of the biggest killers was the introduction of new diseases to which native peoples had no immunity, owing to their previous isolation from the European invaders.

Image from Pexels

Between 1200 and 1500, Europe itself suffered several pandemics, driven by the growth of unsanitary cities, which created the perfect environment for infection, and by increasing contact with Asia, for example through the Mongol and Turkish invasions, which exposed Europe to major disease outbreaks. Between 1346 and 1351 the Black Death killed about a third of Europe’s population. The populations that emerged from this were relatively disease-hardened: although local epidemics continued after 1500, none were as severe as those of the years before, and the worst epidemics after this date occurred instead in colonised nations. Here I will focus on the colonisation of Australia and New Zealand, which involved different native peoples (the Aboriginal peoples and the Maori) and different effects of disease.

New Zealand

Imported diseases began to affect many Maori from the 1790s. These included viral dysentery, influenza, whooping cough, measles, typhoid, venereal diseases and the various forms of tuberculosis. Missionaries and observers reported massive death rates and plummeting birth rates. However, unlike in the Americas and Australia, there is a strong chance that the deaths attributed to foreign disease have been widely exaggerated.

Rather, such exaggeration labelled the Maori as a dying race (a view which persisted until around 1930), which helped to project the British Empire into New Zealand in 1840. One of the main reasons the effect of disease was probably smaller here was simply the distance from Europe to New Zealand; it acted as a natural quarantine. The voyage took four months or more, meaning that the sick either died or recovered on the way; either way, they were often no longer infectious on arrival. As a result, the most pernicious European diseases – malaria, bubonic plague, smallpox, yellow fever, typhus and cholera – never managed to transfer to New Zealand.

Image by Dan Whitfield via Pexels

Another factor which fostered the gross magnification of the demise of the Maori was the comparison of birth rates; missionary families were extremely large – the fourteen couples who went to New Zealand before 1833 had 114 children. It was therefore easy to amplify the decline in Maori birth rates into something far more serious than it was. Estimates of the Maori population at the time of contact with Europeans are very unreliable and, in most cases, wild guesses, which also allows the effect of disease to be misjudged. For example, one estimate for 1769 based on archaeological science gives a pre-contact birth rate of 37 per thousand per year and a death rate of 39[1], rates that would imply the population was already shrinking before Europeans even arrived, and that illustrate how speculative such figures are. More moderate calculations suggest an average decline of 0.3% per year between 1769 and 1858[2]. Therefore, although the Maori population did suffer as a result of these diseases, there has been a tendency to exaggerate this in order to portray them as a ‘weaker’ people and a dying race, allowing for easier colonisation.
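
To make those demographic figures concrete, here is a minimal arithmetic sketch of what the quoted rates imply if treated as constant annual rates; the starting population of 100,000 is a hypothetical placeholder, not a figure from the article or its sources.

```python
# Illustrative arithmetic only: what the cited rates imply if applied as
# constant annual rates. The starting population of 100,000 is a hypothetical
# placeholder, not a figure from the article or its sources.

birth_rate = 37 / 1000      # births per person per year (1769 estimate)
death_rate = 39 / 1000      # deaths per person per year (1769 estimate)
net_rate = birth_rate - death_rate
print(f"Implied natural change: {net_rate:+.1%} per year")   # -0.2% per year

# The 'more moderate' figure: an average decline of 0.3% per year, 1769-1858.
population = 100_000
for year in range(1769, 1858):
    population *= (1 - 0.003)
print(f"Cumulative decline over 1769-1858: {1 - population / 100_000:.0%}")  # ~23%
```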

Australia

Although the first Europeans to reach Australia were Dutch, it was a fleet of British ships that arrived at Botany Bay in January 1788 to establish a penal colony[3]. European diseases spread to parts of Australia even before Europeans themselves had reached them. For example, a smallpox epidemic near Sydney in 1789 wiped out around half of the Aboriginal people there.[4]

Photo by Damon Hall from Pexels

Some historians claim that the disease was acquired through contact with Indonesian fishermen in the far north and spread from there; others argue that the outbreak was likely a deliberate act by British marines who had run out of ammunition and needed to expand their settlement. Unfortunately, colonial thinking at the time cast Europeans as the ‘superior race’; a book written by William Westgarth in 1864 on the colony of Victoria included the line: ‘the case of the Aborigines of Victoria confirms…it would seem almost an immutable law of nature that such inferior dark races should disappear’. Therefore, as with New Zealand, the description of the natives as a ‘dying race’ was an important tool for colonisation, which makes the purposeful introduction and spread of some diseases not too hard to believe.

Smallpox continued to spread between Aboriginal communities, reappearing in 1829-30 and, according to one record, killing 40-60% of the Aboriginal population.[5] In addition, during the mid-to-late 19th century many Aborigines in southern Australia were forced to move to reserves; the nature of many of these institutions enabled disease to spread quickly, and many even began to close down as their populations fell.

Conclusion

Although one must be wary of statistics about native mortality rates in both countries, given the European tendency to exaggerate the decline in native populations, it is fair to say that the decline in Aboriginal populations was much greater than that of the Maori in New Zealand, although wars also contributed massively to this.

While roughly 16.5% of the New Zealand population is Maori, only 3.3% of Australians are Aboriginal, and it is safe to say that disease influenced this to some extent. So why was there such a difference between the effects of disease in two countries seemingly close together and both colonised by the British? A very large reason was smallpox; this was by far the biggest killer in Australia, but it never reached New Zealand. The nature of the existing native communities was also important: there were 200-300 different Aboriginal nations in Australia, all with different languages, whereas the Maori were far more united and often seen as a more ‘advanced’ society, and were therefore never forcibly placed into reserves, which is where much of the spread of disease took place.

In addition, events in New Zealand occurred much later than in Australia, after slavery had been outlawed, meaning there was a slightly more humanitarian approach, and there is less evidence to suggest purposeful extermination of the Maori. This is not to discount any injustices suffered by the Maori; indeed, many did die from European disease, and in both cases the native populations were treated appallingly and were alienated from their land.

The influence of European disease was overwhelmingly more powerful in Australia. However, one must approach statistics about the effect of disease on native peoples with caution, as Europeans tended to exaggerate it in order to portray such peoples as ‘dying races’, a device often used to support colonisation.


Bibliography

Ian Pool, Te Iwi Maori (New Zealand: Oxford University Press), 1991

James Belich, Making Peoples (New Zealand: Penguin Books), 1996

John H. Chambers, A Traveller’s History of New Zealand and the South Pacific Islands (Great Britain: Phoenix in association with the Windrush Press), 2003


[1] cited in Ian Pool, Te Iwi Maori (New Zealand: Oxford University Press), 1991, 35

[2] Ibid, 56

[3] Lewis, Balderstone and Bowan (2006), 25, cited via Wikipedia

[4] Judy Campbell, Invisible Invaders: Smallpox and other Diseases in Aboriginal Australia 1780-1880 (Australia: Melbourne University Press), 2002, 55

[5] Richard Broome, Arriving (1984), 27, cited via Wikipedia

How are organoids going to change biomedical research?


Kate in Year 13 explores how organoids are going to contribute to biomedical research. 

At the moment, biomedical research is carried out almost exclusively in animal models. Although this has led to a better understanding of many fundamental biological processes, it has left gaps in our understanding of human-specific development. In addition, the variability of human individuals is in sharp contrast to inbred animal models, leaving a deficiency in our knowledge of population diversity.

These limitations have pushed scientists to invent a new way of looking at and understanding how the human body works: organoids.

An Organoid (Wikipedia)

Organoids are miniaturised, simplified versions of organs, produced in vitro in three dimensions, that show realistic micro-anatomy. They originate from renewable tissue sources that self-organise in culture to acquire in vivo-like organ complexity. There are potentially as many types of organoid as there are different tissues and organs in the body. This provides many opportunities: organoids allow scientists to study mechanisms of disease acting within human tissues, generate knowledge applicable to preclinical studies, and offer the possibility of studying human tissues with the same, if not greater, scientific scrutiny, reproducibility and depth of analysis that has so far been possible only with non-human model organisms.

Organoids are going to revolutionise drug discovery and accelerate the process of bringing much-needed drugs to reality. At the moment, the process averages around 20 years from conception to reality. It is lengthy mainly because the pharmaceutical industry has relied on animal models and human cell lines that bear little resemblance to normal or diseased tissue, possibly one of the reasons behind the high failure rate of clinical trials, which adds to the high cost of drug discovery: an average of $2 billion for each new drug that reaches the pharmacy.

Organoids can help this development because they use human cells instead of animal cells, so findings translate more directly to patients, making the process quicker and more efficient. Organoids also provide a better understanding of human development.

Above: Uses of organoids, from https://blog.crownbio.com/key-organoid-applications

The human brain, especially the neocortex (the part of the mammalian brain involved in higher-order functions such as sensory perception, cognition, spatial reasoning and language), has evolved to be disproportionately large compared with that of other species. A better understanding of this species-dependent difference through brain organoids will help us learn more about the mechanisms that make humans unique, and may aid the translation of findings made in animal models into therapeutic strategies, helping to answer the question of what makes humans human.

Organoids are the future of biomedical research, providing the potential to study human development and model disease processes with the same scrutiny and depth of analysis customary for research with non-human model organisms. Because they resemble the complexity of the actual tissue or organ, patient-derived human organoid studies will accelerate medical research and generate knowledge about human development that is going to dramatically change the way we study biology in the future.

Invention through desperation – military medical advancements


Jessica, Year 13, explores military medical advancements in recent conflicts, discussing their impact and whether the nature of war acts as an inspiration for innovation.

In 2001, the conflict in Afghanistan began, continuing until a majority of British troops withdrew in the final months of 2014. During these years, 6,386 British personnel were injured, with 28 fatalities, leaving the survival rate at 99.6%.
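
As a quick check of how that 99.6% figure arises from the numbers quoted, here is a minimal sketch; it simply tries the two plausible readings of the casualty figures, and both round to roughly the same rate.

```python
# Quick check of the survival figure quoted above. Two readings of the
# numbers are possible; both round to roughly 99.6%.

injured = 6386
fatalities = 28

# Reading 1: the 28 fatalities are counted among the 6,386 injured personnel.
survival_within_injured = (injured - fatalities) / injured

# Reading 2: the fatalities are counted in addition to the injured.
survival_of_all_casualties = injured / (injured + fatalities)

print(f"{survival_within_injured:.1%}")    # 99.6%
print(f"{survival_of_all_casualties:.1%}") # 99.6%
```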

This was unheard of in previous wars and a major success story for military medicine. However, the injuries and trauma suffered by soldiers during this period increasingly involved haemorrhaging and amputations caused by gunshot wounds and IEDs (improvised explosive devices: crude, unconventional homemade bombs). IEDs cause extensive blood loss, and haemorrhage has been blamed for around 50% of combat deaths since the Second World War. For these soldiers to survive, military medicine had to change in order to preserve life and limb. Three major advances in military trauma medicine all arose from the need to find solutions to the new injuries that personnel and medics were now witnessing.

The first is haemostatic dressings. During the Afghanistan conflict, two new dressings were developed, XSTAT and QuikClot, which contain components such as fibrinogen and thrombin that catalyse the natural coagulation response. XSTAT uses 92 medical sponges in a pocket-sized injector to pack an open wound and halt bleeding within fifteen seconds, increasing the chance of survival and holding pressure until the patient can reach a medical centre. The sponges also contain a marker that is visible on X-ray, to ensure all of them are removed later and so prevent infection.

Secondly, there was a development of the traditional tourniquet. A tourniquet is a constricting or compressing device used to control venous and arterial blood flow to a portion of an extremity for a period of time; it works by creating pressure equal to or higher than the patient’s systolic blood pressure. The single-hand-tie tourniquet is a development of the original tourniquet used by army medics, which had to be applied by the medic and so was carried only by them. Because patients could not apply their own tourniquets, crucial time and blood were lost while the medic reached the injured individual, reducing their chance of survival and increasing the complexity of their injuries and treatment. This led to the Combat Application Tourniquet (CAT), introduced into the US Army in 2005. It was the first single-hand-tie tourniquet, allowing soldiers to treat their own injuries immediately until a medic could attend and provide more advanced care. The tourniquet distributes pressure over a greater area, which reduces underlying tissue and nerve damage and prevents the limb from becoming ischaemic (having a deficient blood supply) while remaining effective. This decrease in the time before a tourniquet is applied has reduced the mortality rate from haemorrhaging by 85%.

A third category of advance is in the use of blood and the way it is transported. Blood and blood products, such as platelets, are crucial in the treatment of haemorrhaging and amputations. However, for blood to remain viable for transfusion it must be kept in a cool, constant environment, far from the natural conditions of Afghanistan. This was previously a significant disadvantage and contributed to low survival rates for haemorrhaging, but it improved with the development of the blood container. The Golden Hour mobile blood container stores up to four units of blood and platelets at their required temperatures (between two and six degrees Celsius for blood)[1] for 72 hours without electricity, batteries or ice, to aid emergency medics. Crucially, this enabled blood to be brought forward to the battlefield rather than stored at the field hospital.

The environment of the military and the nature of its role mean that trauma medicine needs to evolve to deal with the types of injury it encounters: invention through desperation. However, it is important that care reflects not only the immediate treatment of the patient but also their long-term needs, to ensure they can achieve a high quality of life post-conflict.

What would happen if there was no stigma around mental illness?


Emily, Year 12, explores why there is a stigma around mental illnesses, how we can get rid of this stigma, and what effect the stigma has on society.

Mental illness is not just one disorder – many people know that – but what they often don’t appreciate is quite how expansive the list of disorders is. As young girls, we are taught about anxiety, body dysmorphic disorder, depression, addiction, stress and self-harm, but the likelihood is that we know – from personal experience, through friends, family or even social media – that many more mental illnesses exist: for example, bipolar disorder, obsessive-compulsive disorder, schizophrenia, autism and ADHD. Chances are, we all know someone with a mental illness, whether we realise it or not; the majority of the time these people function in the same way as people with no mental illness. So why is there such a stigma around mental illness, and how can we get rid of it?
When the AIDS epidemic started in the early 1980s, the disease mainly affected minority groups who already faced discrimination. The disease only furthered this and made patients virtual pariahs, until advocacy groups and communities protested to expand awareness and pressured the U.S. government to fund research into the disease and its cure. In only seven years, scientists were able to identify that the cause of AIDS was the human immunodeficiency virus (HIV), create the ELISA test to detect HIV in the blood, and establish azidothymidine (AZT) as the first antiretroviral drug to help those suffering from HIV/AIDS. This is a prime example of how public awareness can push science to extend its boundaries and find treatments. As well as relieving symptoms, treatments also erode the stigma, as more and more people learn about the disease. So why can’t this be the case for mental illness?

In a time when science wasn’t breaking new boundaries every day and knowledge wasn’t being distributed widely, it is easy to see why those with such complicated illnesses were feared and surrounded by stigma. However, now that the greatest barrier is access to treatment rather than the science itself, and education about the subject is better than it has ever been, it is hard to see why there is still such shame attached to these illnesses.

But what if there were no stigma? We would have early identification and intervention in the form of screening mechanisms in primary care settings such as GP, paediatric, obstetric and gynaecological clinics, as well as schools and universities. The goal would be to screen those who are at risk of, or are having symptoms of, mental illness and engage them in self-care and treatment before the illness severely affects their brains and lives. We would also have community-based comprehensive care for those in more advanced stages of illness. This would support people who are unable to care for themselves and who may otherwise end up homeless, in jail or in mental hospitals.
For example: victims of trauma would be treated for PTSD along with any physical injuries while in the hospital to target PTSD before any symptoms started occurring and the patient could hurt themselves or others; first responders would have preventative and decompression treatments routinely administered to treat PTSD before waiting to see who may or may not show symptoms; mothers would be treated for pre/post-partum depression as a part of pre/post-natal check-ups instead of waiting and potentially harming themselves or their baby. Children with learning disabilities would be identified early on so they could get cognitive training, and emotional support to prevent counterproductive frustration due to something they cannot control.

Medical economists have shown that this kind of proactive mental healthcare would actually reduce the cost of delivering it. It would also relieve emotional stress for patients and their families, reduce the financial burden of treatment, and reduce the occurrence of many prevalent social problems. We all know about the mass shootings that occur with depressing regularity, and a great number of these crimes have been perpetrated by young men with an untreated mental illness whose symptoms had been present long before the crime was committed – not that this excuses their behaviour in any way.

As a worldwide community, we must recognise mental illness for what it is: a medical condition that can be treated, whether with behavioural or cognitive therapy or with medication. In order to dissolve the stigma, we must be involved, ask questions, be kind, be compassionate, and make it our own business. There is only so much science can do if people are not willing to take the help they are offered – they need to want to get better. The only way this will happen is if we all help to make it known that having a mental illness is not a bad thing, that it is treatable, and that those who live with it are no different from anyone else.

The Brain Chemistry of Eating Disorders

Jo, Year 13, explores what is happening chemically inside the brains of those suffering from eating disorders and shows how important this science is to understanding these mental health conditions.

An eating disorder is any of a range of psychological disorders characterised by abnormal or disturbed eating habits. Anorexia is defined as a lack or loss of appetite for food, and as an emotional disorder characterised by an obsessive desire to lose weight by refusing to eat. Bulimia is defined as an emotional disorder characterised by a distorted body image and an obsessive desire to lose weight, in which bouts of extreme overeating are followed by fasting, self-induced vomiting or purging. Anorexia and bulimia are often chronic, relapsing disorders, and anorexia has the highest death rate of any psychiatric disorder. Individuals with anorexia and bulimia are consistently characterised by perfectionism, obsessive-compulsiveness and dysphoric mood.

Dopamine and serotonin function are integral to both of these conditions; how does brain chemistry enable us to understand what causes anorexia and bulimia?

Dopamine

Dopamine is a compound that acts in the body as a neurotransmitter; it is primarily responsible for pleasure and reward, and in turn influences our motivation and attention. It has been implicated in the symptom pattern of individuals with anorexia, specifically in the mechanisms of reinforcement and reward involved in anorexic behaviours such as restricting food intake. Dysfunction of the dopamine system contributes to characteristic traits and behaviours of individuals with anorexia, which include compulsive exercise and the pursuit of weight loss.

In people suffering from anorexia, dopamine release is stimulated by restricting food to the point of starvation. People feel ‘rewarded’ by severely reducing their calorie intake, and in the early stages of anorexia the more dopamine that is released, the more rewarded they feel and the more reinforced the restricting behaviour becomes. In bulimia, dopamine serves as the ‘reward’ and ‘feel-good’ chemical released in the brain when overeating. Dopamine ‘rushes’ affect people with both anorexia and bulimia, but in anorexia it is starving that releases dopamine, whereas in bulimia it is binge eating.

Serotonin

Serotonin is responsible for feelings of happiness and calm – too much serotonin can produce anxiety, while too little may result in feelings of sadness and depression. Evidence suggests that altered brain serotonin function contributes to the dysregulation of appetite, mood and impulse control seen in anorexia and bulimia. High levels of serotonin may result in heightened satiety, meaning it is easier to feel full. Starvation and extreme weight loss decrease levels of serotonin in the brain. This results in temporary relief from negative feelings and emotional disturbance, which reinforces anorexic symptoms.

Tryptophan is an essential amino acid found in the diet and is the precursor of serotonin, meaning it is the molecule required to make serotonin. Theoretically, bingeing behaviour is consistent with reduced serotonin function, while anorexia is consistent with increased serotonin activity. So decreased tryptophan levels in the brain, and therefore decreased serotonin, increase bulimic urges.

Conclusions

Distorted body image is another key concept to understand when discussing eating disorders. The area of the brain known as the insula is important for appetite regulation and also for interoceptive awareness, the ability to perceive signals from the body such as touch, pain and hunger. Chemical dysfunction in the insula, a structure that integrates mind and body, may lead to the distorted body image that is a key feature of anorexia. Some research suggests that the problems people with anorexia have with body image distortion can be related to alterations in interoceptive awareness. This could explain why a person recovering from anorexia can draw a self-portrait of their body that is typically three times its actual size. Prolonged untreated symptoms appear to reinforce the chemical and structural abnormalities in the brain seen in those diagnosed with anorexia and bulimia.

Therefore, in order not only to understand but also to treat anorexia and bulimia, it is essential to look at the brain chemistry behind these disorders, so that we can better understand how to go about treating them successfully.

 

How fungi help trees to communicate

Freya, Year 13, explores how trees are able to communicate and help each other using a network of fungi in the soil.

Underneath your feet there could be as much as 300 miles of fungal filaments packed into the soil. This special network of fungi, called the mycorrhizal network, brings together fungi and trees in a symbiotic relationship that helps trees to communicate, and it has been coined the ‘wood wide web’. You may have unknowingly seen mycorrhizae before; they are long and white and look a bit like silly string.

When a tree seed is germinating, its roots grow towards the fungi in the soil. In return for nutrients and water from the fungi, trees send sugars down to them. This is of great value to the fungi, as they cannot photosynthesise and so cannot make their own sugars, which are needed for growth. The network connects not only the fungi and trees but also the different trees in a given area: all the trees whose roots grow into the mycorrhizae are linked, and this allows them to communicate.

Using the mycorrhizal network, a tree that has been attacked by a pest can send danger signals to other trees. When other trees pick up this signal, they release their own chemicals above ground to attract the predators of the pest towards them, thereby reducing the pest population.
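
One way to picture this is to treat the wood wide web as a graph of connected trees and ask which trees a danger signal can reach. The toy sketch below does exactly that; the tree names, the network layout and the idea of modelling the signal as a simple graph traversal are all invented for illustration and are not from the article or the underlying research.

```python
# A toy sketch of the 'wood wide web' idea: trees as nodes in a graph,
# mycorrhizal connections as edges, and a danger signal spreading from an
# attacked tree to every tree it is connected to. The names and layout are
# invented purely for illustration.

from collections import deque

# Each tree is linked to the trees it shares mycorrhizae with.
network = {
    "oak_1": ["birch_1", "oak_2"],
    "birch_1": ["oak_1", "fir_1"],
    "oak_2": ["oak_1"],
    "fir_1": ["birch_1"],
    "lone_pine": [],            # not connected to the fungal network
}

def alerted_trees(network: dict, attacked: str) -> set:
    """Return every tree that a danger signal can reach from the attacked tree."""
    reached, queue = {attacked}, deque([attacked])
    while queue:
        tree = queue.popleft()
        for neighbour in network[tree]:
            if neighbour not in reached:
                reached.add(neighbour)
                queue.append(neighbour)
    return reached - {attacked}

print(alerted_trees(network, "oak_1"))   # e.g. {'birch_1', 'oak_2', 'fir_1'} (set order may vary)
```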

Amazingly, when a tree ‘knows’ it is dying, it will do everything it can to aid the survival of the trees around it. Researchers noted that as an injured tree was dying, it sent its carbon down through its roots into the mycorrhizal network so that it could be absorbed by neighbouring trees, strengthening them in the process.

The driving researcher behind this work, Suzanne Simard, found that trees will help each other out when they are in shade. She used carbon-14 tracers to monitor the movement of carbon from one tree to another, and found that trees growing in more light would send more carbon to trees in shade, allowing them to photosynthesise more and so helping them provide food for themselves. At times when one tree had lost its leaves and could not photosynthesise as much, more carbon was sent to it from evergreen trees.

This discovery could be used in the future to reduce the disastrous effects of deforestation. If loggers keep the network of fungi intact, with many of the oldest trees still present, new trees planted will be able to utilise and reuse carbon more efficiently thanks to the wood wide web.

Nanotechnology and its future in medicine – 07/09/18

Maya (Year 11) discusses the uses of nanotechnology in medicine, thinking about how far it has come and how it has helped doctors. She also considers the dangers of using such small technology and the future benefits it may bring.

Technology in medicine has come far, and with it the introduction of nanotechnology. Nanotechnology is the manipulation of structures and properties at the atomic and molecular level, working at the scale of the nanometre (one-billionth of a metre). It has many applications, such as electronics, energy production and medicine, and is valuable for the diversity of its uses. Nanotechnology is useful in medicine because of its size: it interacts with biological molecules of the same proportions or larger. It is a valuable new tool that is being used for research and for combating various diseases.

In medicine, nanotechnology is already being used in a wide variety of areas, the principal one being cancer treatment. In 2006 a report issued by NanoBiotech Pharma stated that developments related to nanotechnology would mostly be focused on cancer treatments. Drugs such as Doxil, used to treat ovarian cancer, use nanotechnology to evade the immune system, enabling the drug to be delivered to disease-specific areas of the body. Nanotechnology is also helping in neuroscience, where European researchers are currently using the technology to carry electrical activity across dead brain tissue left behind by strokes and other illnesses. The initial research was carried out to obtain a more in-depth analysis of the brain and to create more biocompatible grids (pieces of technology that surgeons place in the brain to find where a seizure has taken place). These are more sophisticated than previous technologies and, when implanted, do not cause as much damage to existing brain tissue.

Beyond cancer treatment and research, nanotechnology is used in many areas of medicine, from appetite control to medical tools, bone replacement and even hormone therapy. Nanotechnology is advancing all areas of medicine, with nano-sized particles enhancing new bone growth, and there are even wound dressings containing nanoparticles that provide powerful antimicrobial action. It is with these new developments that we are revolutionising the field of medicine, and with further advances we will be able to treat diseases as soon as they are detected.

Scientists hope that in the future nanotechnology can go even further and replace chemotherapy altogether: fighting cancer by using gold and silica nanoparticles that bind to the mutated cells in the body, then using infra-red lasers to heat up the gold particles and kill the tumour cells. This application would be beneficial because it would reduce the risk of damage to surrounding cells, which the laser would affect far less than chemotherapy does.

In other areas, nanotechnology is developing further in diagnostics and medical data collection. Using this technology, doctors would be able to look for the damaged genes associated with particular cancers and screen tumour tissue faster and earlier than before. This process involves nano-scale devices being distributed through the body to detect chemical changes. Quantum dots can also be used to label a patient’s DNA, which is then sequenced to check whether they carry a particular debilitating gene, providing a quicker and easier way for doctors to check in detail whether a patient has an illness or disease. Furthermore, nanotechnology will give doctors a more in-depth analysis and understanding of the body than X-rays and other scans can provide.

While this is a great start for nanotechnology, there is still little known about how some of the technology might affect the body. Insoluble nanoparticles, for example, carry a high risk of building up in organs because they cannot diffuse into the bloodstream. And because nanoparticles are so small, there is no controlling where they go: they might enter cells and even their nuclei, which could be very dangerous for the patient. The House of Lords Science and Technology Committee has reported concerns about the effects of nanotechnology on human health, stating that sufficient research has not been conducted on “understanding the behaviour and toxicology of nanomaterials” and that this has not been given enough priority, especially given the speed at which nanotechnology is being produced.

Nanotechnology is advancing medical treatment at a rapid rate, with innovative new technologies approved each year to help combat illness and disease. While more research needs to be conducted, the application of nanomedicine promises benefits with the potential to be hugely valuable. Overall, given the great burden that conditions such as cancer, Alzheimer’s, HIV and cardiovascular disease place on current healthcare systems, nanotechnology and its advanced techniques could revolutionise healthcare as the field progresses.

@Biology_WHS 

Critical Thinking: “the important thing is not to stop questioning.” – Albert Einstein

Richard Gale, teacher of Biology at WHS, looks at the value of critical thinking and how we can use this to help make logical and well-structured arguments.

At some point we all accept a fact or an opinion without challenging it, especially if we deem the person telling us the fact or opinion to be in a position of authority.

Challenging or questioning these people can seem daunting and rude, or at worst we could appear ignorant or stupid. However, if we never challenged or questioned ideas or perceived facts then the world would still be considered to be flat, and we would not have the theories of relativity or evolution.

This is what Einstein is getting at, that all ideas and preconceived facts should be questioned otherwise society will stagnate and no longer advance in any field of study. This process of constantly asking questions and challenging ideas is known as critical thinking.

It is said that a critical thinker will identify, analyse, evaluate and solve problems systematically rather than by intuition or instinct; almost a list of the higher-order thinking skills from Bloom’s taxonomy. The reason for treating critical thinking as a key higher-order skill is that, as Paul and Elder (2007) noted, “much of our thinking, left to itself, is biased, distorted, partial, uninformed or downright prejudiced. Yet the quality of our life and that of which we produce, make, or build depends precisely on the quality of our thought.”

In essence, critical thinking requires you to use your ability to reason. It is about being an active learner rather than a passive recipient of information, asking questions to understand the links that exist between different topics. It requires learners to weigh up the importance and relevance of evidence and arguments, identifying which arguments are weak and which are stronger; to build and appraise their own arguments; and to identify inconsistencies and errors in arguments and reasoning, doing all of this in a systematic and consistent way. They should then reflect on the justification for their own assumptions, beliefs and values. As Aristotle put it, “it is the mark of an educated mind to be able to entertain a thought without accepting it.”

Critical thinkers rigorously question ideas and assumptions rather than accepting them at face value. They will always seek to determine whether the ideas, arguments and findings represent the entire picture, and they are open to finding that they do not. In principle, anyone stating a fact or an opinion (and I am definitely including myself here as a teacher) should be able to explain why they hold that fact or opinion when questioned, and should be able to convince a class or an individual that those ideas have merit. Equally, as I know my pupils would attest to, pupils should be able to explain why they hold their opinions or ideas when questioned. While this may seem daunting and at times a bit cruel, being able to think critically has become a very important skill with the onset of the new A levels.

In Biology, under the reformed linear A level, there has been an increase in the percentage of marks awarded for the higher-order thinking skills termed AO2 and AO3. AO2 is to ‘apply knowledge and understanding of scientific ideas, processes, techniques and procedures’, whereas AO3 is to ‘analyse, interpret and evaluate scientific information, ideas and evidence, including in relation to issues.’ Across the three papers, AO2 is weighted at 40-45% of the overall marks and AO3 at 25-30%. Pupils taking the exams are expected to interpret data and theories critically, as well as to analyse and apply the information they have learnt in completely novel situations. The following quote from Carl Sagan is now more significant, as knowing facts is no longer enough for pupils to succeed: “knowing a great deal is not the same as being smart; intelligence is not information alone but also judgment, the manner in which information is collected and used.”

Thankfully, we can develop and train ourselves – and others – to be critical thinkers. There is a plethora of guides and talks on how we can develop our skills as critical thinkers, and choosing which one is most useful is tricky and to an extent futile, as they all repeat the same basic principles with different language and animations. I have tried to summarise these as follows:

  1. Always ask questions of the fact or information provided and keep going until you are satisfied that the idea has been explained fully.
  2. Evaluate the evidence given to support the idea or fact; misconceptions are often based on poor data or poor interpretations. What is the motive of the source of the information? Is there bias present? Do your research and find the arguments for and against: which is stronger, and why?
  3. Finally, do not assume you are right; remember that we ourselves have biases and should challenge our own assumptions. What is my truth? What are the truths of others?

We can practise these skills when we are in any lesson or lecture, as well as when we are reading, to help develop a deeper understanding of a text. Evaluating an argument requires us to think if the argument is supported by enough strong evidence.

Critical thinking skills can be practised at home and in everyday life by asking people to provide a reason for a statement. This can be done as they make it, or by playing games such as having to swap three items you currently have for three things you want, and then rationalising each choice. You can even engage in a bit of family Desert Island Discs, taking it in turns to practise your Socratic questioning (treat each answer with a follow-up question).

There are a few pitfalls to consider when engaging in critical thinking. The first of these is ignorant certainty: the belief that there are definite correct answers to all questions. Remember that all current ideas and facts are just our best interpretation of the best information or data we currently have to hand, and all of them are subject to re-evaluation and questioning. The next pitfall is more particular to critical thinking: naïve relativism, the belief that all arguments are equal. While we should consider all arguments, we cannot forget that some arguments are stronger than others, and some are indeed wrong. Even Isaac Newton, genius that he was, believed that alchemy was a legitimate pursuit.

Critical thinking is not easy; you have to be prepared to let go of your own beliefs and accept new information. Doing so is uncomfortable, as we base ourselves on our beliefs but ultimately it is interesting and rewarding. As you explore your own beliefs and those of others through questioning, evaluating, researching and reviewing, know that this is enriching your ability to form arguments and enhancing your opinions and thoughts. You do not know what you will discover and where your adventure will take you, but it will take you nearer to the truth, whatever that might be. Whilst on your journey of lifelong learning remember to “think for yourselves and let others enjoy the privilege to do so, too” (Voltaire).

Follow @STEAM_WHS and @Biology_WHS on Twitter.

Hotspotting: the conservation strategy to save our wildlife?


Alex (Year 11) investigates whether the strategy of hotspot conservation is beneficial to reducing mass extinction rates, or if this strategy is not all it claims to be.

Back in 2007, Professor Norman Myers was named a Time Magazine Hero of the Environment for his conservation work in relation to biodiversity hotspots. He first came up with his concept of hotspot conservation in 1988, when he expressed his fear that ‘the number of species threatened with extinction far outstrips available conservation resources’. The main idea was that he would identify hotspots for biodiversity around the world, concentrating conservation efforts there and in this way saving the greatest possible number of species.

Myers’ fears are even more relevant now than they were 30 years ago. According to scientific estimates, dozens of species are becoming extinct daily, amounting to the worst wave of extinction since the death of the dinosaurs 65 million years ago. And this is not a natural event like a giant meteor colliding with the Earth: 99% of the species on the IUCN Red List of Threatened Species are at risk from human activities such as ocean pollution and habitat loss through deforestation, among other things. It is therefore crucial that we act now and adopt a range of conservation strategies to give our ecosystems a chance of survival for future generations.

To be accepted as a hotspot, a region must meet two criteria: firstly, it must contain a minimum of 1,500 endemic (native or restricted to a certain area) plant species; secondly, it must have lost at least 70% of its original vegetation. Following these rules, 35 areas around the world, ranging from the Tropical Andes in South America to the more than 7,100 islands of the Philippines and the whole of New Zealand and Madagascar, were identified as hotspots. These areas cover only 2.3% of Earth’s total land surface but contain more than 50% of the world’s endemic plant species and 43% of endemic terrestrial bird, mammal, reptile and amphibian species, making them crucial to the world’s biodiversity.
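
For readers who like to see the rule written out explicitly, here is a minimal sketch of how the two criteria combine; the example regions and their figures are hypothetical placeholders, not real survey data.

```python
# A minimal sketch of the two hotspot criteria described above. The example
# figures below are invented placeholders, not data for any real region.

def is_hotspot(endemic_plant_species: int, vegetation_lost_fraction: float) -> bool:
    """A region qualifies if it has at least 1,500 endemic plant species
    and has lost at least 70% of its original vegetation."""
    return endemic_plant_species >= 1500 and vegetation_lost_fraction >= 0.70

# Hypothetical examples, purely to show how the criteria combine.
print(is_hotspot(endemic_plant_species=3000, vegetation_lost_fraction=0.85))  # True
print(is_hotspot(endemic_plant_species=3000, vegetation_lost_fraction=0.40))  # False: too much vegetation remains
print(is_hotspot(endemic_plant_species=800,  vegetation_lost_fraction=0.90))  # False: too few endemic plants
```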

This concept has been hailed as a work of genius by conservationists and has consequently been adopted by many conservation agencies, such as Conservation International, which believes that success in conserving these areas and their endemic species will have ‘an enormous impact in securing our global biodiversity’.

The principal barrier to all conservation efforts is funding, as buying territories and caring for them costs a great deal of money, which is primarily raised from businesses, governments and individual donors. Most of this funding comes through campaigns focused on charismatic megafauna such as the penguin or the snow leopard. These campaigns motivate people because they feel a closer connection to these animals and feel they are really making a difference in conserving them. When conservation is done at a larger, regional level, there is less of the gratification that comes with donating money, as donors feel they have less control over the conservation work being done. Identifying 35 specific areas towards which funds can be directed reconnects the public, as well as larger companies and local governmental bodies, to the projects, thereby encouraging more donations. It is for this reason that hotspot conservation has received £740 million, the largest amount ever assigned to a single conservation strategy.

Although the 35 areas identified are relatively widespread and well funded, this strategy has been criticised for its neglect of other crucial ecosystems. First of all, there are no hotspots in northern Europe and many other areas around the world, neglecting many species of both flora and fauna. Also, because the criteria for classification as a hotspot refer only to endemic plant species, many animal species are overlooked, from insects to large and endangered species such as elephants, rhinos, bears and wolves. Furthermore, areas referred to as ‘coldspots’ are ignored altogether, which could lead to the collapse of entire ecosystems following the extinction of key species.

Another major issue with this strategy is that terrestrial environments make up only around 29.2% of the Earth’s surface area. The other 70.8% is covered by very diverse (but also very threatened) oceans and seas. Marine environments are overlooked by hotspot conservationists because they rarely have 1,500 endemic plant species: deep oceans with very little light are not an ideal environment for plant growth, and species floating near the surface are rarely confined to one specific area, so they do not count as endemic.

So, if even the more successful strategies for conservation are so flawed, is there any hope for the future? I think that yes, there is. Although there is no way to save every species on Earth, identifying crucially important areas on which to concentrate our resources is essential to modern conservation. Hotspot conservation is certainly improving the ecological situation in these 35 areas, and so those efforts should be continued, but that does not mean that all conservation work should be focused only on these hotspots. Hotspot conservation should be part of the overall strategy for reducing mass extinction rates, but it is not the fix-all solution that some claim it is.

Follow @Geography_WHS & @EnviroRep_WHS on Twitter.