Should prisoners on Death Row be accepted as Organ Donors?

Isobel, a Year 10 pupil at WHS, assesses the ethics and logistics of accepting death row prisoners as organ donors.

Disclaimer: This piece is based on the US death row system and does not reflect my own views on capital punishment.

From a utilitarian standpoint, there may appear to be a simple answer to this question: organ donation should be permitted because there is a global shortage of transplantable organs, and those in dire health are unable to receive the medical care they need. However, on closer inspection numerous practical and ethical barriers arise. One country that already utilises organs from death row inmates is China. Reports state that more than 5,000 prisoners are executed in China annually, and organs are harvested for transplantation from suitable prisoners. These prisoners are executed via a gunshot wound to the temple and are declared dead secondary to execution. They are not declared brain dead, which causes many ethical headaches, because the physicians removing the organs are then put in the position of executioner. This brief case study begins to highlight some of the major arguments against organ donation from death row prisoners.

Above: surgery. Image via https://pixabay.com

The numerous practical barriers surrounding organ procurement from death row prisoners begin to pile up after closer inspection. The first issue is the low yield of transplantable donor organs from these prisoners, due to a potentially high likelihood of alcohol or drug abuse. Whilst this risks stereotyping, these factors can drastically impact the quality of the organs being donated.

For example, alcoholism is the leading cause of liver disease in the US because heavy drinking can cause irreversible cirrhosis. Approximately 10-20% of heavy drinkers develop this disease, and it is the ninth leading cause of death in the US, killing around 35,000 people a year. Prisoners in long-term facilities will not live on nutrient-rich diets, will most likely be malnourished because of the (often) poor-quality food they have consumed, and will not get adequate exercise to maintain strong organs such as the heart and lungs. These factors could also impact the quality of their organs for transplantation.

The second practical barrier preventing condemned prisoners from being organ donors is the logistics on the day of execution. The surgeon performing the operation cannot kill the patient by removing their organs, as this breaches the Hippocratic principle of 'do no harm'. The patient must therefore already be dead, or pronounced brain dead, before being put under general anaesthesia: only when the transplant team cross-clamps the aorta, performs the cardiectomy and takes the patient off the ventilator is death declared. Many physicians' groups, including the American Medical Association, have prohibited physician participation in state executions on ethical grounds.

Looking through a utilitarian lens, the death of one organ donor can mean several lives saved, and the donation exists simply to help those suffering from end-stage organ disease, not to serve any ulterior motive. The two documents that set out the rules around organ donation in the US are the National Organ Transplant Act of 1984 and the Uniform Anatomical Gift Act. Neither explicitly prohibits organ donation by death row inmates, which means there is no law preventing it from happening. The National Organ Transplant Act states that organ donation cannot be made for 'valuable consideration', including an exchange of money, material benefit or a shortened sentence. This would not be an issue for death row inmates, as they have already been condemned until the end of their life and have little access to wider society.

Christian Longo went public with his wish to donate his organs as a condemned prisoner and joined the organisation G.A.V.E. (Gifts of Anatomical Value from Everyone). He came up with the idea himself, so there is no fear of coercion, and he approached the New York Times with his story on a voluntary basis. There have been 14 other publicised instances of death row inmates and their lawyers seeking the opportunity to donate their organs; all were denied, largely on the grounds of the practical and ethical concerns described above. As support for capital punishment wanes, public sympathy for those on death row is increasing, and the conversation about a prisoner's ability to choose one good action in the world before their execution is becoming ever louder.

When a person is incarcerated, many of their rights as a free citizen no longer apply. This can heighten the ethical arguments around organ donation, or simply make them too confusing to comprehend. Two seemingly opposite arguments illustrate this: the fear of coercion (which insinuates that death row inmates are not adequately protected) and the intention to preserve the morality of capital punishment (which implies that death row inmates' rights are given too much protection).

There is a fine line between coercion and free choice when a decision is made in a heavily pressurised environment like a prison, and the emotional stress on the donor can be intense because of the need to make things right. The patient accepting the donation should also be notified that the organs they are receiving came from a person on death row; those who oppose capital punishment are then forced to choose between their life and their personal morals. Many say that the purpose of capital punishment is to achieve retribution and deterrence in society, and the act of donation is not consistent with either of these aims. Making a hero of the person at the end of their life could have a detrimental impact on the family and friends of the victim of the prisoner's crime. For many, the death of the perpetrator of their pain can bring closure and an end to the cycle of grief; to see them glorified in their last days could have the opposite effect.

Image via https://pixabay.com

It is important to consider the impact that organ donation from death row prisoners would have on the practice overall. The number of organs recovered from condemned prisoners would be small, and the stigma that could attach to organ donation through its coupling with execution might lead to decreases in donation rates. This may be especially true within certain minority groups.

Any notion that groups of people were receiving increased numbers of death sentences to provide organs for the rest of society would clearly make it difficult to attempt to obtain consent for altruistic donation from these groups.

Overall, the bad outweighs the good: although it may seem like an easy solution to a difficult problem, donation from death row inmates would cause more problems than it could hope to solve.

Immunology: a brief history of vaccines

Sienna (Year 11) looks at the history of immunisation, from variolation to vaccination, exploring some of the topics around this important science.

History of Immunisation:

Variolation: 

While vaccination is considered quite a modern medical procedure, it has its roots in more ancient history. In China there are records of a procedure to combat smallpox as early as the year 1000. This was called variolation: pus was taken from a patient with a mild case of smallpox and given to another person, who would then usually develop a less dangerous case of smallpox than they might otherwise have done, provoking an immune response that prevented the disease in future. The method became established around the world, being practised in Africa, Turkey and England in the 1700s, and it paved the way for the work of Edward Jenner, who is considered the 'father of vaccination'.

Later in the 1700s, the USA learned of variolation from slaves who arrived already inoculated from Africa. Though a remarkable feat for the time, it wasn't without risk: immunity was achieved by direct exposure to the virus, so inoculated patients could still die from it – as happened to King George III's son and countless slaves. However, the risk of dying from variolation was far smaller than the risk of catching and dying from smallpox, so the practice remained popular despite the danger.

 

Origin of the first widely accepted vaccination: 

Vaccination, as we know it in modern terms, was first established in 1796 by Edward Jenner, a scientist and Fellow of the Royal Society in London. Seeing how much of a problem smallpox was at that time (and for most of history prior to then), Jenner was interested in improving on the process of variolation to tackle smallpox.

He was inspired by something he had heard as a child from a dairymaid: "I shall never have smallpox for I have had cowpox. I shall never have an ugly pockmarked face." This inspired him later in life to carry out an experiment in which he inoculated an eight-year-old boy with cowpox. He recorded that the boy felt slightly ill for around 10 days after the procedure, but afterwards was completely fine. When the boy was injected with active smallpox material a few months later, he did not show any symptoms of the disease, and Jenner concluded his experiment had been a success.

After writing up his findings, Jenner decided to name the new procedure vaccination, as the Latin for cowpox is 'vaccinia'. His paper was met with a mixed reaction from the medical community. Despite this, vaccination began gaining popularity thanks to the work of other doctors such as Henry Cline, a surgeon with whom Jenner had been in close contact.

Due to the success of the procedure, especially compared to variolation, vaccination could be found in almost all of Europe by the turn of the century (just a few short years after Jenner had run his experiment), and was particularly concentrated in England. The success of Jenner's work is outstanding: by 1840 vaccination had replaced variolation as the main weapon against smallpox, so much so that variolation was prohibited by an Act of Parliament. The disease that had ripped so mercilessly through the world for centuries saw its last natural case in 1977, and the World Health Organisation (WHO) declared it eradicated in 1980 – perhaps more than Jenner, who died in 1823, could ever have hoped his discovery would achieve.

Edward Jenner: 

Image via Pexels

Despite undeniably being a force for good in the world, Jenner was also remarkable on a smaller scale. Even when supplies were low, he would send his inoculant to anyone who asked for it – medical associates, friends and family, even strangers. Later in his life he set up his 'Temple of Vaccinia' in his garden, where he vaccinated the poor free of charge. Despite the opportunity, Jenner made no attempt to profit from his work, viewing his invention instead as a contribution to science and to humanity – and this was perhaps vital to the speed at which the vaccine and the vaccination process spread.

Modern Vaccinations: 

Nowadays vaccinations have changed – not in principle but in the nitty-gritty science – as we have begun to understand more about how our immune system works. Jenner's approach has been adapted to suit different diseases: a modern vaccine may contain a very mild strain of a virus carrying similar surface antigens (such as spike proteins), a dead strain of the virus, or even the isolated antigen itself, enabling the body to recognise the pathogen without being exposed to its dangers.

Introducing the body to the same spike proteins found on the harmful pathogen is, in essence, how vaccination works. The body recognises these spike proteins as foreign and so sends phagocytes (a type of white blood cell) to destroy them, and lymphocytes to create antibodies as part of an immune response. This is why, a few days after vaccination, there may be a feeling of discomfort or slight fever: the body is fighting those spike proteins.

While the spike proteins are being destroyed, the body creates memory cells. These are the most important part of the vaccination procedure: if the body is exposed to the actual, more dangerous pathogen in the future, the memory cells will recognise the spike protein and the body will mount a secondary immune response, producing antibodies sooner and in much greater quantity. Secondary immune responses are far more effective, and often the person will never show any symptoms of the disease, with the pathogens destroyed within a matter of days.

Viral Vector Vaccines:

These are an example of the exciting advances in vaccination. In this type of vaccine – such as the COVID-19 vaccine developed in the UK by Oxford University – the gene for the target virus's spike protein is inserted into an adenovirus (a weakened, harmless virus that acts as a carrier into our bodies). Our cells then produce the spike protein, triggering a strong immune response without the actual virus ever being introduced into the body. This is an effective way to ensure memory cells against that virus are created, and it contributes to the Oxford vaccine's high reported efficacy.

mRNA Vaccines:

The other exciting new adaptation is the mRNA vaccine, used in some of the COVID-19 vaccines. The mRNA is essentially a set of instructions for the body to make the spike protein of the pathogen itself, rather than the protein being cultivated in a laboratory and put into the vaccine; after that, the immune response is exactly the same. This allows the vaccine to be produced more quickly. However, due to its newer and more complicated nature, it is more expensive to produce and, because mRNA is unstable, it must be stored at very low temperatures. This causes logistical issues with storage and distribution, and is why the DNA-based vaccine has been hailed as the best option for low-income countries that do not have the facilities to store mRNA vaccines: DNA vaccines can be stored at fridge temperature, as DNA is far more stable than mRNA thanks to its double-helix structure. The mRNA vaccine was pioneered by two scientists of Turkish origin working in Germany who, like Jenner, thought outside the box to improve human health in the race to find an effective vaccine. They have been enormously successful, with their mRNA vaccine displaying 95% effectiveness against COVID-19 seven or more days after the second dose is administered.
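
To make a figure like '95% effectiveness' concrete: trial efficacy is usually calculated as one minus the ratio of the attack rate (cases per participant) in the vaccinated group to that in the placebo group. The short sketch below uses illustrative case counts of roughly the shape reported for the Pfizer/BioNTech trial – an assumption for demonstration, not the exact published data.

```python
# A minimal sketch of how trial efficacy is calculated: one minus the ratio of
# the attack rate in the vaccinated group to that in the placebo group.

def vaccine_efficacy(cases_vacc: int, n_vacc: int,
                     cases_placebo: int, n_placebo: int) -> float:
    attack_rate_vacc = cases_vacc / n_vacc
    attack_rate_placebo = cases_placebo / n_placebo
    return 1 - attack_rate_vacc / attack_rate_placebo

# Illustrative counts only (roughly the shape reported for the Pfizer/BioNTech
# trial, not the exact published figures): 8 cases among ~21,700 vaccinated
# participants versus 162 among ~21,700 who received the placebo.
print(f"Efficacy: {vaccine_efficacy(8, 21_700, 162, 21_700):.0%}")  # Efficacy: 95%
```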

Image via Pexels

Controversies of vaccinations:

During this pandemic, there has been widespread appreciation of how vital vaccines will be to control the spread of COVID-19. However, the voices of sceptics, often amplified by social media, seem to have found a more prominent platform. They do not trust vaccination due to a variety of unfounded concerns, one of which is the claim that vaccinations are really a way for the government to implant chips into its citizens. Not only does this theory ignore the historic science of vaccination, but logistically the needle would need to be far wider and the subsequent puncture wound far more noticeable.

The autism study:

Unfortunately, even though a 1998 article by Andrew Wakefield falsely claiming a link between autism and the MMR vaccine was quickly shown to be based on unfounded evidence, it continues to resurface among sceptics as an argument against vaccines. Wakefield's study used only 12 children, far too small a group to draw any kind of reliable conclusion, and he was later struck off the UK medical register over the paper. The study was retracted, and its hypothesis has been thoroughly rejected by the medical community through subsequent research and publication. The amplification of this fraudulent study has been cited as a reason for a decline in the uptake of the MMR vaccination and subsequent small outbreaks of measles.

Development of COVID-19 vaccines:

For some, the speed with which the COVID-19 vaccines have been developed – under a year, compared with standard research times that can be as long as a decade – is itself grounds for scepticism.

However, this is not because corners have been cut; rather, it is due to the immense amount of funding and equipment given to scientists, and the sheer number of people working on the vaccines, to prioritise their development. In Phases I, II and III, human trials extensively assess how the vaccine works in a diverse range of age groups, races, body types and pre-existing health conditions, and accurately measure the body's immune response – the antibodies and cells produced – as well as the efficacy and safety of the drug. The results are then scrutinised by the regulators: the Medicines and Healthcare products Regulatory Agency in the UK, the European Medicines Agency in the EU and the Food and Drug Administration in the USA.

The World Health Organisation listed 'vaccine hesitancy' as one of the top ten threats to global health in 2019, and overcoming it will play a crucial role in how quickly life can return to normal after the COVID-19 pandemic. Vaccinations are humanity's biggest weapon against the pandemic; they are, in the words of Sir David Attenborough, 'a great triumph of medicine'. Although there has been recent news about mutations of the virus, it is important to remember that this is completely to be expected. The recent talk of the South African, UK and Brazilian variants concerns small changes in the spike protein which have affected the transmissibility of the virus. Tests are currently being run, but early signs show that the vaccines are still effective against these variants.

Even in the worst-case scenario, the vaccines can be adapted in a matter of weeks or months, and the government is preparing for a situation in which a COVID-19 vaccine has to be given annually to those at high risk, similar to the current flu vaccine. It comes as a relief that finally, in the wake of such a disruptive and terrible pandemic, there is light at the end of the tunnel and a reason to look forward to better days, knowing that this lockdown will bring real benefit as, every day, more people receive these game-changing vaccinations.


Sources:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1200696/

https://www.historyofvaccines.org/timeline/all

https://www.britannica.com/science/variolation

https://www.nhs.uk/news/medication/no-link-between-mmr-and-autism-major-study-finds/

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4944327/

https://www.bmj.com/content/371/bmj.m4826

https://www.independent.co.uk/news/uk/home-news/david-attenborough-anti-vax-ignorance-covid-b1797083.html

https://www.nature.com/articles/d41586-020-02989-9

Is geothermal energy the answer to our climate problems?

Lucy in Year 10 looks at issues surrounding climate change and the damage our current ways of living are having on the planet. Might geothermal energy offer the UK, and the world, a solution for us to clean up our act?

We are in the midst of a climate crisis: the UK government has recently committed to achieving net zero emissions by 2050 to help stop further damage to the environment. The burning of fossil fuels to generate power is a significant contributor to the UK's greenhouse gas emissions, so the use of renewable energy sources is critically important to meeting this commitment. There are already many established sources of renewable energy, such as wind, solar and tidal power, but geothermal energy might be an unexpected solution to the UK's problems.

Geothermal energy: a solution to a cleaner future?
Picture from https://www.britannica.com/science/geothermal-energy

Geothermal energy uses the natural heat from within the Earth’s crust to heat water and create steam.  This steam then powers a turbine in a similar way to the production of energy using fossil fuels, with the key exception that the heat comes from the earth instead of from the burning of coal, oil or gas.  So, like other forms of renewable energy, geothermal energy produces far less CO2 than fossil fuels do.

The key advantage geothermal energy offers over many other forms of renewable energy is consistency. Solar cells and wind turbines rely on climate and weather conditions to operate, so the amount of energy they produce varies and can be unreliable. Geothermal energy doesn't have that problem: whatever the weather, a geothermal plant can produce a steady, predictable amount of energy. The problems caused by inconsistent energy provision have already been seen; only weeks after setting a new wind power generation record, a breezeless day in January 2021 resulted in a shift back to fossil-fuelled power and a tenfold surge in spot energy prices.[1]

Geothermal energy is currently in the news due to a recent announcement to build the UK's first deep geothermal power plant, at Porthtowan in Cornwall. It will produce enough energy to power 10,000 homes[2] – a small share of national demand, but an important proof of concept. So, why don't we build them everywhere?[3]
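
To put '10,000 homes' in perspective, here is a rough back-of-envelope conversion into a continuous power output. The average household consumption figure of 3,800 kWh per year is an assumption for illustration, as is the near-continuous generation (geothermal's key advantage).

```python
# Rough sketch: converting "enough energy to power 10,000 homes" into an
# approximate continuous plant output.
HOMES = 10_000
KWH_PER_HOME_PER_YEAR = 3_800   # assumed average annual UK household consumption
HOURS_PER_YEAR = 365 * 24       # 8,760 hours

annual_demand_kwh = HOMES * KWH_PER_HOME_PER_YEAR
average_output_mw = annual_demand_kwh / HOURS_PER_YEAR / 1_000  # kW -> MW

print(f"Approximate continuous output: {average_output_mw:.1f} MW")  # ~4.3 MW
```

A few megawatts of steady output is modest next to a gas or nuclear plant, which is why such a project matters more as a demonstration than as a bulk supplier.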

While geothermal energy does have significant benefits, it also comes with its own set of problems. The most prominent is the very specific characteristics the Earth's crust must have for steam to be superheated and power the turbines. Unlike somewhere such as Iceland, which sits on a tectonic plate boundary, suitable locations are few and far between in the UK, and some will unfortunately be in populous areas, where the negative aesthetics of a power station could outweigh its benefits. Another worrying fact about geothermal plants is that their construction, and the drilling of geothermal wells into the Earth's surface, have been the cause of several earthquakes over the past decade (such as the 5.5-magnitude earthquake in Pohang, South Korea in 2017). While this is less of a risk for the geologically more stable UK, it is still a factor to be considered – though I would hasten to add that this risk is smaller than that of CO2 from fossil fuels or the toxic clean-up of a nuclear power station!

While geothermal energy plants are undoubtedly an effective and positive use of the Earth’s natural resources to create a sustainable and consistent supply of energy, the problems that their construction and capabilities raise mean that it would be impossible for them to become the sole provider of the UK’s energy. However, it is undeniable that their existence and use could aid the UK greatly in our battle against greenhouse gases and the climate crisis. While geothermal energy cannot solve the climate problem alone, it should definitely be a part of the UK’s, and the world’s, solution to the threat that is the climate crisis.

 


REFERENCES

[1] https://www.thetimes.co.uk/article/the-energy-answer-is-not-blowin-in-the-wind-xbntdm6pv

[2] https://www.newscientist.com/article/mg24032000-300-supercharged-geothermal-energy-could-power-the-planet/

[3] Check out https://cornishstuff.com/2019/09/11/successful-drilling-at-uks-first-deep-geothermal-plant-in-cornwall/ to see the new Geothermal Plant take shape

 

Will we ever be able to live on the moon?

Isabelle in Year 11 looks at whether we will ever be able to live on the moon, and what this might involve.

 

Ever since man first stepped onto the moon, the idea of one day living there has become increasingly prominent. NASA's several lunar missions have brought back information that shows the potential of a new home for the human race and, with Earth slowly becoming less habitable due to global warming, it is now more important than ever to find a (potentially radical) solution. The other planets in our solar system have extreme temperatures and pressures that would make it impossible for us to survive and, since technology has not advanced enough to send life beyond the moon, the habitable planets outside our solar system are unlikely to be within reach in the next 100 years.

Astronaut on the moon
Above: Astronaut via Pixabay

Data collected by NASA has shown that the moon's surface (made up of regolith) has the consistency and cohesiveness of baking flour; although it is similar to sand on the Earth's surface, it has very different properties. A build-up of electrostatic forces causes the regolith particles to stick to equipment and astronauts' suits, and clouds of dust could become trapped around the wheels of vehicles, rendering them immobile. It would certainly be difficult to build infrastructure on this type of surface, but a planned Artemis mission in 2024 will send scientists and engineers to the surface to examine the potential.

Water is essential for humans and, although the moon lacks liquid water, molecules can be found trapped in rocks and minerals, or as ice at the poles. This water could be extracted to sustain human life for some time – certainly not the whole of Earth's population, but potentially enough for a moon base. Oxygen for breathing can also be found in the moon's surface, as it makes up around 42% of the regolith. It could be extracted by robots, for which NASA has already built prototypes, and used alongside hydrogen as rocket fuel. So the moon already has the raw materials for two of the conditions necessary for humans to live.
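
As a rough illustration of that 42% figure (and nothing more – real extraction yields would be far lower), the sketch below estimates how long the oxygen in one tonne of regolith could supply a single astronaut. The requirement of about 0.84 kg of oxygen per person per day is a commonly quoted figure, assumed here for illustration.

```python
# Back-of-envelope sketch: oxygen available from one tonne of regolith,
# assuming ~42% oxygen by mass and (unrealistically) full extraction.
REGOLITH_KG = 1_000
OXYGEN_FRACTION = 0.42
O2_KG_PER_PERSON_PER_DAY = 0.84  # assumed daily requirement per astronaut

oxygen_kg = REGOLITH_KG * OXYGEN_FRACTION
person_days = oxygen_kg / O2_KG_PER_PERSON_PER_DAY

print(f"{oxygen_kg:.0f} kg of oxygen, roughly {person_days:.0f} person-days of breathing")
# -> 420 kg of oxygen, roughly 500 person-days of breathing
```
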
Food is a little more complicated. In previous space missions astronauts have brought light, compact packets of non-perishable food, but flying food to the moon every few months would cost a huge amount, and a whole civilisation would require far more food than three or four astronauts. The moon's soil contains toxic elements that would kill plants before they had the chance to grow, but experiments have found that adding human manure makes the soil safer to use. This sustainable way of producing food would require only seeds to be brought in the spaceship.

A major difference between the moon and Earth is the strength of gravity: the moon's gravity is around a sixth of the Earth's. This has a negative impact on humans, as the reduced load causes bone density and muscle mass to deteriorate through underuse, and heart rate and blood pressure to decrease dramatically. Astronauts' fitness levels have been shown to drop, with aerobic capacity reducing by 20-25%. However, there have been no recorded deaths from prolonged exposure to low gravity, and medicine can help our bodies adapt to the new norm.
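
A minimal sketch of what 'a sixth of Earth's gravity' means for weight, using the standard surface gravity values of 9.81 m/s² for Earth and 1.62 m/s² for the moon (the 80 kg mass is purely illustrative):

```python
# Weight (W = mg) on Earth versus the Moon, using standard surface gravity values.
G_EARTH = 9.81  # m/s^2
G_MOON = 1.62   # m/s^2

def weight_newtons(mass_kg: float, g: float) -> float:
    return mass_kg * g

astronaut_mass_kg = 80  # illustrative mass
print(f"Weight on Earth: {weight_newtons(astronaut_mass_kg, G_EARTH):.0f} N")  # ~785 N
print(f"Weight on Moon:  {weight_newtons(astronaut_mass_kg, G_MOON):.0f} N")   # ~130 N
print(f"Moon/Earth ratio: {G_MOON / G_EARTH:.2f}")  # ~0.17, about a sixth
```
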
Cosmic radiation rarely affects us on Earth because the planet's magnetic field and atmosphere shield us from most of it; the moon has no such protection. Scientists have found that hydrogen can act as a shield and have considered wrapping a form of it around infrastructure. Another option would be to build housing from bricks made of regolith, as these would also protect humans. Certain elevated points at the moon's poles receive sunlight almost constantly, so solar cells positioned there would be an excellent option for providing power.

Scientists have thought about nearly everything needed to sustain a base or civilisation on the moon. The problem with it all is the cost. There have not been many missions to the moon because of the expense of building a rocket that carries everything necessary, including advanced technology such as the rovers used to transport astronauts around the surface. Even a handful of people would require several rockets, as well as robots and other equipment, so sending enough people to create even a small base is impossible for the near future. The dream is not dead yet, though. Elon Musk, who recently became the richest man in the world, has set his sights on building a small settlement on the moon, among other things, through his SpaceX programme; with all the information gathered, this could become a reality for the next generations.


Sources:

https://www.nasa.gov/feature/goddard/2019/a-few-things-artemis-will-teach-us-about-living-and-working-on-the-moon

https://www.iop.org/explore-physics/moon//how-could-we-live-on-the-moon#gref

https://theconversation.com/five-things-that-happen-to-your-body-in-space-52940

How are organoids going to change biomedical research?

Microscope

Kate in Year 13 explores how organoids are going to contribute to biomedical research. 

At the moment, biomedical research is almost exclusively carried out in animal models. Although this has led to a better understanding of many fundamental biological processes, it has left gaps in our understanding of human specific development. In addition to this, the variability of human individuals is in sharp contrast to inbred animal models, leading to a deficiency in our knowledge about population diversity.

These limitations have forced scientists to invent a new way of looking at and understanding how the human body works; their answer was organoids.

An Organoid (Wikipedia)

Organoids are miniaturised and simplified versions of organs, produced in vitro in 3D, which show realistic micro-anatomy. They originate from renewable tissue sources that self-organise in culture to acquire in vivo-like organ complexity, and there are potentially as many types of organoid as there are tissues and organs in the body. This opens up many opportunities: it allows scientists to study mechanisms of disease acting within human tissues, generates knowledge applicable to preclinical studies, and offers the possibility of studying human tissues with a level of scientific scrutiny, reproducibility and depth of analysis previously possible only with non-human model organisms.

Organoids are going to revolutionise drug discovery and accelerate the process of bringing much-needed drugs to market. At present, the process averages around 20 years from conception to approval. It is lengthy mainly because the pharmaceutical industry has relied on animal models and human cell lines that bear little resemblance to normal or diseased tissue – possibly one reason behind the high failure rate of clinical trials, which adds to the high cost of drug discovery: an average of $2 billion for each new drug that reaches the pharmacy.

Organoids can help this development by using human cells instead of animal cells due to the improved compatibility, making it quicker and more efficient. Organoids are also able to provide a better understanding of human development.

Organoid graph
Above: Uses of organoids from https://blog.crownbio.com/key-organoid-applications

The human brain, especially the neocortex (the part of the mammalian brain involved in higher-order functions such as sensory perception, cognition, spatial reasoning and language), has evolved to be disproportionately large compared with that of other species. A better understanding of this species-dependent difference through brain organoids will help us learn more about the mechanisms that make humans unique, and may aid the translation of findings made in animal models into therapeutic strategies – answering the question of what makes humans human.

Organoids are the future of biomedical research, providing the potential to study human development and model disease processes with the same scrutiny and depth of analysis customary for research with non-human model organisms. Resembling the complexity of actual tissues and organs, patient-derived human organoid studies will accelerate medical research and generate knowledge about human development, dramatically changing the way we study biology in the future.

What is obstetric fistula and why is it so common in developing countries?

Shirley, Year 10, looks at Obstetric Fistula, the most devastating and serious childbirth injury and explores why the injury is so common in developing countries, and the impact it has on the life of women.

Fistula

‘Obstetric Fistula is the worst thing you’ve never heard of’

An obstetric fistula is often considered 'one of the most serious and tragic childbirth injuries', yet very few people have heard of it.

An obstetric fistula is a hole between the vagina and rectum or bladder and is caused by prolonged, obstructed labour without access to timely, high-quality medical treatments such as an emergency C-section. In the case of an obstructed labour, the fetus is unable to come out of the mother’s body, usually because the baby is too big, or is in the wrong position.

Fistula diagram

Days of unrelieved labour create compression that cuts off the blood supply to both the baby and the mother's internal soft tissue, causing both to die. The dead tissue results in holes (fistulas) in the walls separating the woman's reproductive and excretory systems.

Why is it so common in developing countries?

Map
Obstetric fistula has been virtually eradicated in developed countries due to the availability of good medical care, such as the C-section and obstetric facilities.

However, this is not the case in developing countries.

Obstetric fistula occurs among women who live in low-resource countries, who give birth without access to medical help. There is often a lack of access to medical facilities, lack of adequately trained medical staff and not enough medical supplies and equipment.

It is estimated that there may be at least two million women and girls, living in poverty, who suffer from fistula. The problem is particularly prevalent in Africa, parts of Asia, parts of Latin America, the Arab States region and the Caribbean.

In many developing countries, girls tend to marry and begin childbearing at a very young age, often before their body is sufficiently developed to cope with this. The lack of formal education and of access to accurate information about family planning, pregnancy and childbirth also makes girls much more vulnerable to obstructed labour. Cultural beliefs and traditions sometimes also prevent girls from seeking the medical care they need.

What is the impact of obstetric fistula on the life of women?

‘Obstetric fistula leaves women without hope’

As women with obstetric fistula are unable to control the flow of waste, they are often isolated because of their 'foul smell'. A woman's community will almost always detach itself from her, and in many cases her husband will leave her and send her back to her own family.

‘She is scorned, bewildered, humiliated and isolated, often being cursed by God.’ – New York Times Column “New life for the Pariahs” on October 31st, 2009

Yet… it is neglected

Despite how life-threatening the condition is, fistula receives very little attention from the media, and funding is virtually non-existent, representing just 0.07% of annual global health funding. Awareness of fistula is limited because the condition is very rare in Europe and the US.

99% of women who develop obstetric fistula will never have a chance at treatment. To stop this from happening, we need to raise awareness of the condition.

Girl


Photos from:

https://www.opfistula.org/obstetric-fistula/

https://www.who.int/features/factfiles/obstetric_fistula/en/ (World Health Organisation)

Further reading:

https://www.fistulafoundation.org/what-is-fistula/

https://www.opfistula.org/obstetric-fistula/

https://www.who.int/features/factfiles/obstetric_fistula/en/

Should the periodic table be turned upside down?

Chemistry beakers

Isabel, Y10, explores the comprehensibility of Dmitri Mendeleev's traditional periodic table and whether flipping it upside down would make it more accessible for younger children and enhance learning.

The Periodic Table is an important symbol in Chemistry and, since Dmitri Mendeleev's discovery of the periodic system in 1869, its essential arrangement has remained the same for 150 years; but could turning it 180˚ make important concepts easier to understand, especially when teaching younger children?

This year has been declared the International Year of the Periodic Table, celebrating the arrangement that has become the standard way of organising the elements. However, some scientists, such as Martyn Poliakoff and his team, have started to question its comprehensibility. After extensive research, they decided to flip the traditional arrangement upside down so that the information is more understandable and intuitively ordered.

The research team argues that this presentation is more helpful and has many benefits. Firstly, when the table is flipped, properties of the elements such as atomic mass and proton number increase from bottom to top, as on the axis of a graph, making more numerical sense. Secondly, it represents the Aufbau principle more accurately, which states that electrons fill up 'shells' from low to high energy. Finally, when young children are trying to learn from the table, the elements most relevant to them are located towards the bottom, making its use quicker and more accessible; in lessons, students will not have to look all the way to the top of the table to find the right information.
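
As an illustration of the Aufbau principle mentioned above, the sketch below generates the conventional subshell filling order using the Madelung rule (fill in order of increasing n + l, breaking ties by lower n). This is an illustrative aside, not part of the Nottingham team's work.

```python
# A small sketch of the Aufbau (Madelung) rule: subshells fill in order of
# increasing n + l, with ties broken by lower n.
SUBSHELL_NAMES = "spdf"

subshells = [(n, l) for n in range(1, 6) for l in range(min(n, 4))]
filling_order = sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

print(" -> ".join(f"{n}{SUBSHELL_NAMES[l]}" for n, l in filling_order))
# 1s -> 2s -> 2p -> 3s -> 3p -> 4s -> 3d -> 4p -> 5s -> 4d -> 5p -> 4f -> 5d -> 5f
```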

Above: Inverted Periodic Table. Source: University of Nottingham

Above: Traditional Periodic Table

However, when I compared the two versions of the periodic table myself, I found that the traditional form of the table made more sense to me for many reasons. For example, in both situations I found my eyes drawn to the top row of elements, so it did not matter that the elements that I use the most were on the bottom row. However, this could be put down to a force of habit, so I also asked my 10-year-old brother to look at the two perspectives of the table and see where he looked. He immediately pointed to the top of both and when I asked him the reason he said that from top to bottom ‘is the way you read’ so the properties make more sense going down from top to bottom. He also seemed to prefer the traditional table, commenting that it was like a ‘pyramid’ in the way the numbers were arranged and was a much clearer way to display the elements.

Whilst some may argue that the arrangement of the table is more effective upside down, for me the traditional version works just as well. Testing this principle with a larger group would allow different models to be tried, to see whether it makes understanding the periodic table easier for younger learners.


References:

Martyn Poliakoff et al, Nat. Chem., 2019, https://www.nature.com/articles/s41557-019-0253-6

 

What would happen if there was no stigma around mental illness?

Mental Illness

Emily, Year 12, explores why there is a stigma around mental illnesses, how we can get rid of this stigma, and what effect the stigma has on society.

Mental illness is not just one disorder – and many people know that – but what they don't understand is quite how expansive the list of disorders is. As young girls, we are taught about anxiety, body dysmorphic disorder, depression, addiction, stress, and self-harm, but the likelihood is that we know – from personal experience, through friends, family or even social media – that many more mental illnesses exist. For example: bipolar disorder, obsessive-compulsive disorder, schizophrenia, autism and ADHD. Chances are, we all know someone with a mental illness, whether we know it or not – the majority of the time these people function in the same way as people with no mental illness do. So why is there such a stigma around mental illness, and how can we get rid of it?

When the AIDS epidemic started in the early 1980s, the disease primarily affected minority groups who already faced discrimination, and it made patients virtual pariahs until advocacy groups and communities protested to expand awareness and pressured the U.S. government to fund research into the disease and its cure. In only seven years, scientists were able to identify the cause of AIDS as the human immunodeficiency virus (HIV), create the ELISA test to detect HIV in the blood, and establish azidothymidine (AZT) as the first antiretroviral drug to help those suffering from HIV/AIDS. This is a prime example of how public knowledge can lead to science pushing boundaries and finding treatments. Along with eliminating symptoms, treatments also erode the stigma, as more and more people learn about the disease. So why can't this be the case for mental illness?

In a time when science wasn't breaking new boundaries every day and knowledge wasn't being widely distributed, it is easy to see why those with such complicated illnesses were feared and stigmatised. However, now that the greatest barrier is access to treatments rather than the science itself, and education about the subject is as good as it has ever been, it is hard to see why there is still such shame in having these illnesses.

But what if there was no stigma? We would have early identification and intervention in the form of screening mechanisms in primary care settings such as GP, paediatric, obstetric and gynaecological clinics and offices, as well as schools and universities. The goal would be to screen those who are at risk of, or are having symptoms of, mental illness and engage the patients in self-care and treatment before the illness severely affects their brain and their life. We would also have community-based comprehensive care for those in more advanced stages of illness. This would support people who are unable to care for themselves and who may otherwise end up homeless, in jail or in mental hospitals.

For example: victims of trauma would be treated for PTSD, along with any physical injuries, while in hospital, targeting PTSD before symptoms emerged and the patient could hurt themselves or others; first responders would routinely receive preventative and decompression treatments rather than waiting to see who did or did not show symptoms; mothers would be treated for pre- and post-partum depression as part of pre- and post-natal check-ups, instead of waiting until they potentially harmed themselves or their baby. Children with learning disabilities would be identified early on so they could get cognitive training and emotional support, preventing counterproductive frustration over something they cannot control.

Medical economists have shown that this method of proactive mental healthcare would actually reduce the cost of delivering it. It would also relieve emotional stress (for the patient and their family) and the financial burden of treatment, and would reduce the occurrence of many prevalent social problems. We all know about the mass shootings that occur with grim regularity; a great many of these crimes have been perpetrated by young men with untreated mental illness whose symptoms had been present long before the crime was committed – not that this excuses their behaviour in any way.

As a worldwide community, we must be able to recognise mental illness for what it is – a medical condition that can be treated, be that with behavioural or cognitive therapy or with medication. In order to dissolve the stigma, we must be involved, ask questions, be kind, be compassionate and make it our own business. There is only so much science can do if people are not willing to accept the help they are offered – they need to want to get better. The only way this will happen is if we all help to make it known that having a mental illness is not a bad thing, that it is treatable, and that those affected are no different from anyone else.

The Brain Chemistry of Eating Disorders

Jo, Year 13, explores what is happening chemically inside the brains of those suffering from eating disorders and shows how important this science is to understanding these mental health conditions.

An eating disorder is any of a range of psychological disorders characterised by abnormal or disturbed eating habits. Anorexia is defined as a lack or loss of appetite for food: an emotional disorder characterised by an obsessive desire to lose weight by refusing to eat. Bulimia is an emotional disorder characterised by a distorted body image and an obsessive desire to lose weight, in which bouts of extreme overeating are followed by fasting, self-induced vomiting or purging. Anorexia and bulimia are often chronic, relapsing disorders, and anorexia has the highest death rate of any psychiatric disorder. Individuals with anorexia and bulimia are consistently characterised by perfectionism, obsessive-compulsiveness and dysphoric mood.

Dopamine and serotonin function are integral to both of these conditions; how does brain chemistry enable us to understand what causes anorexia and bulimia?

Dopamine

Dopamine is a compound present in the body as a neurotransmitter; it is primarily responsible for pleasure and reward, and in turn influences our motivation and attention. It has been implicated in the symptom pattern of individuals with anorexia, specifically in the mechanisms of reinforcement and reward involved in anorexic behaviours such as restricting food intake. Dysfunction of the dopamine system contributes to characteristic traits and behaviours of individuals with anorexia, including compulsive exercise and the pursuit of weight loss.

In people suffering from anorexia, dopamine levels are stimulated by restricting food to the point of starving. People feel 'rewarded' by severely reducing their calorie intake and, in the early stages of anorexia, the more dopamine that is released, the more rewarded they feel and the more reinforced the restricting behaviour becomes. In bulimia, dopamine serves as the 'reward' and 'feel-good' chemical released in the brain when overeating. Dopamine 'rushes' affect people with both anorexia and bulimia, but for people with anorexia it is starving that releases dopamine, whereas for people with bulimia it is binge eating.

Serotonin

Serotonin is responsible for feelings of happiness and calm – too much serotonin can produce anxiety, while too little may result in feelings of sadness and depression. Evidence suggests that altered brain serotonin function contributes to the dysregulation of appetite, mood and impulse control seen in anorexia and bulimia. High levels of serotonin may result in heightened satiety, meaning it is easier to feel full. Starvation and extreme weight loss decrease levels of serotonin in the brain, which brings temporary relief from negative feelings and emotional disturbance, reinforcing anorexic symptoms.

Tryptophan is an essential amino acid found in the diet and is the precursor of serotonin, meaning it is the molecule required to make serotonin. Theoretically, binging behaviour is consistent with reduced serotonin function, while anorexia is consistent with increased serotonin activity; so decreased tryptophan levels in the brain, and therefore decreased serotonin, increase bulimic urges.

Conclusions

Distorted body image is another key concept in understanding eating disorders. The area of the brain known as the insula is important for appetite regulation and also for interoceptive awareness: the ability to perceive signals from the body such as touch, pain and hunger. Chemical dysfunction in the insula, a structure that integrates mind and body, may lead to distorted body image, a key feature of anorexia. Some research suggests that the body image distortion experienced by people with anorexia can be related to alterations in interoceptive awareness; this could explain why a person recovering from anorexia can draw a self-portrait of their body that is typically three times its actual size. Prolonged untreated symptoms appear to reinforce the chemical and structural abnormalities seen in the brains of those diagnosed with anorexia and bulimia.

Therefore, in order not only to understand but also to treat anorexia and bulimia successfully, it is essential to look at the brain chemistry behind these disorders.

 

As teachers, do we need to know about big data?

Clare Roper, the Director of Science, Technology and Engineering at WHS, explores the world of big data. As teachers, should we be aware of big data? What data is being collected on our students every day? And, equally relevant, how could we increase awareness of the almost unimaginable possibilities that big data might expose our students to in the future?

The term ‘big data’ was first included in the Oxford English Dictionary in 2013, where it was defined as “extremely large data sets that may be analysed computationally to reveal patterns, trends, and associations.”[1] In the same year it was listed by the UK government as one of the eight great technologies that now receive significant investment, with the aim of ensuring the country is a world leader in innovation and development.[2]

‘Large data sets’ of approximately 10,000 data points in a spreadsheet have recently been introduced into the A Level Mathematics curriculum, but ‘big data’ is on a different scale entirely, with the amount of data expanding at such speed that it cannot be stored or analysed using traditional methods. In fact, it is predicted that between 2012 and 2020 the global volume of data will increase exponentially from 4.4 zettabytes to 44 zettabytes (i.e. 44 × 10²¹ bytes)[3], and data scientists now talk of ‘data lakes’ and ‘dark data’ (data that you do not know about).
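
As a quick aside, here is what that tenfold rise implies as an annual growth rate, assuming smooth exponential growth between 2012 and 2020 (a simplifying assumption):

```python
# Sketch: implied compound annual growth rate for data volumes rising from
# 4.4 ZB in 2012 to 44 ZB in 2020 (a tenfold increase over 8 years).
start_zb, end_zb, years = 4.4, 44.0, 8

annual_growth = (end_zb / start_zb) ** (1 / years) - 1
print(f"Implied compound annual growth rate: {annual_growth:.0%}")  # ~33%
```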

But should we be collecting every piece of data imaginable in the hope it might be useful one day, and is that even sustainable or might we be sinking in these so-called lakes of data? Many data scientists argue that data on its own actually has no value at all and that it is only when it is analysed in context that it becomes valuable. With the introduction of GDPR in the EU, there has been a lot of focus on data protection, data ethics and the ownership and security of personal data.

At a recent talk at the Royal Institution, my attention was drawn to the bias that exists in some big data sets. Even our astute Key Stage 3 scientists will be aware that if the data you collect is biased, then inevitably any conclusions drawn from it will at best be misleading and, more likely, be meaningless. The same premise applies to big data. The example given by Maja Pantic from the Samsung AI Lab in Cambridge concerned facial recognition, and the cultural and gender bias that currently exists within some of the big data behind the related software – but this is only one of countless examples of bias within big data on humans. With more than half the world’s population online, digital data on humans makes up the majority of the phenomenal volume of big data generated every second. Needless to say, those people who are not online are not included in this big data, and therein lies the bias.

There are many examples in science where the approach to big data collection has been different to that taken with data on humans (unlike us, chemical molecules do not generate an online footprint by themselves), and new fields in many sciences are advancing because of big data. Weather forecasting and satellite navigation rely on big data, and new disciplines have emerged, including astroinformatics, bioinformatics (boosted even further recently thanks to an ambitious goal to sequence the DNA of all life – the Earth Biogenome Project), geoinformatics and pharmacogenomics, to name just a few. Although the term ‘big data’ is too new to be found in any school syllabus as yet, here at WHS we are already dabbling in big data (e.g. the MELT project, IRIS with Ark Putney Academy, Twinkle Orbyts, UCL with Tolcross Girls’ and Tiffin Girls’, and the Missing Maps project).

To grapple with the idea of the value of big data collections and what we should or should not be storing and analysing, I turned to CERN (the European Organization for Nuclear Research). They generate millions of collisions every second from the Large Hadron Collider and therefore will have carefully considered big data collection. It was thanks to the forward thinking of the British scientist Tim Berners-Lee at CERN that the world wide web exists as a public entity today, and it seems scientists at CERN are also pioneering in their outlook on big data. Rather than store all the information from every one of the 600 million collisions per second (and create a data lake), they discard 99.99% of this data as it is produced and store data from only approximately 100 collisions per second. Their approach is born from the idea that although they might not know what they are looking for, they do know what they have already seen.[4] Although CERN is not yet using DNA molecules for the long-term storage of its data, it seems not so far-fetched that one of a number of new start-up companies may soon make this a possibility.[5]
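
Taking the paragraph’s figures at face value, a quick sketch of the scale of that filtering:

```python
# Storing ~100 of ~600 million collisions per second means keeping only a tiny
# fraction of the data produced (figures as quoted in the text above).
COLLISIONS_PER_SECOND = 600_000_000
STORED_PER_SECOND = 100

kept = STORED_PER_SECOND / COLLISIONS_PER_SECOND
print(f"Fraction stored:    {kept * 100:.7f}%")        # 0.0000167%
print(f"Fraction discarded: {(1 - kept) * 100:.5f}%")  # 99.99998%
```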

None of us knows what challenges lie ahead for us as teachers, nor for our students as we prepare them for careers we have not even heard of, but it does seem that big data will influence more of what we do and, invariably, how we do it. Smart data – filtered big data that is actionable – seems a more attractive prospect as we work out how to balance intuition and experience against newer technologies reliant on big data, where there is a potential for us to unwittingly drown in the “data lakes” we are now capable of generating. Big data is an exciting, rapidly evolving entity, and it is our responsibility to decide how we engage with it.

[1] Oxford Dictionaries: www.oxforddictionaries.com/definition//big-data, 2015.

[2] https://www.gov.uk/government/speeches/eight-great-technologies

[3] The Digital Universe of Opportunities: Rich Data and the Increasing Value of the Internet of Things, 2014, https://www.emc.com/leadership/digital-universe/

[4] https://home.cern/about/computing

[5] https://synbiobeta.com/entering-the-next-frontier-with-dna-data-storage/