Taking a cue from
Henri Bergson’s theory of time, Hafsa in Year 10 examines the science behind
our sense that time speeds up when we are enjoying ourselves
Time is the most used noun in the
English language, and yet humans are still struggling to define it, given its
complexity and the many competing theories about its nature. We have all lived
through the physical fractions of time, like the incessant ticking of the second
hand or the gradual change of season; but do we actually experience it in
this form? This is a question that requires the tools of both philosophy and
science in order to reach a conclusion.
In scientific terms, time can be
defined as ‘The progression of events from the past to the present into the
future’. In other words, it can be seen as made up of the seconds, minutes, and
hours that we observe in our day-to-day life. Think of time as a one-directional
arrow: it cannot be travelled across or reversed, but only ever moves forward.
One philosophical theory would
challenge such a definition of time. In 1889, the renowned philosopher Henri
Bergson published his doctoral thesis,
‘Time and Free Will: An Essay on the Immediate Data of Consciousness’, in which
he explored his theory that humans experience time differently from this
outwardly measurable sense. He suggested that as humans we divide time into
separate spatial constructs such as seconds and minutes but do not really experience
it in this form. If Bergson’s theory is right, our sense of time is really much
more fluid than the scientific definition above suggests.
If we work from the inside out, we can
explore the different areas of our lives which influence our perception of
time. The first area is the biological make-up of our bodies. We all have
circadian rhythms which are physical, mental, and behavioural changes that
follow a twenty-four-hour cycle. This rhythm is present in most living things
and is most commonly responsible for determining when we sleep and when we are
awake. These internal body clocks vary from
person to person, some running slightly longer than twenty-four hours and some
slightly less. Consequently, everyone’s internal sense of time differs, from
when people fall asleep or wake up to how people feel at different points
during the day.
But knowing that humans have slight
differences in their circadian rhythms doesn’t fully explain how our sense of
time differs from the scientific definition. After all, these circadian rhythms
still follow a twenty-four-hour cycle just like a clock. If we look at the
wider picture, what is going on around us greatly affects our sense of time. In
other words, our circadian rhythms are subject to external stimuli.
Imagine you are doing something you
love, completely engrossed in the activity, whether it be an art, a science, or
just a leisurely pastime. You look at the clock after what feels like two
minutes and realise that twenty have actually passed. The activity acts as the
external stimulus and greatly affects your perception of time.
When engrossed in an activity you
enjoy, your mind is fully focussed on it, meaning there is no time for it to
wander and look at the clock. Research suggests that the pleasurable event boosts
dopamine release, which causes your circadian rhythm to run faster. Let's
take an interval of five minutes as a basis. Because your internal body clock
is running faster, you feel as though only two minutes have gone by; time
feels as though it has been contracted.
By contrast, when you are bored, less
dopamine is released, slowing your circadian rhythm, meaning your
subjective sense of time runs slower. If we use the same example, in an
interval of five minutes, you feel as though ten minutes have gone by and time
feels elongated. This biological process has the power to shape our perception
of time and to make it fluid.
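The contraction and elongation described above can be captured in a toy calculation. This is only an illustrative sketch, not a model from the research mentioned: the rate factors are invented to match the examples in the text, where a rate below 1 makes time feel contracted and a rate above 1 makes it feel elongated.

```python
# Toy model of subjective time (illustrative only): perceived duration =
# actual duration * rate. A rate below 1 corresponds to a fast internal
# clock (time "contracts"); a rate above 1 to a slow one (time "elongates").

def perceived_minutes(actual_minutes: float, rate: float) -> float:
    """Return how long an interval feels, given a subjective rate factor."""
    return actual_minutes * rate

# Five real minutes while engrossed, with an assumed rate of 0.4:
print(perceived_minutes(5, 0.4))  # 2.0 - feels like two minutes
# Five real minutes while bored, with an assumed rate of 2.0:
print(perceived_minutes(5, 2.0))  # 10.0 - feels like ten minutes
```

The same five-minute interval thus produces two very different felt durations purely from the assumed rate factor, which is the essay's point in miniature.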
So, the next time someone says ‘Wow,
time really does fly by when you’re having fun,’ remember that there is much more
science and philosophy behind the phrase than they might realise!
Verity in Year 12 looks at how the pre-Raphaelites
interpreted the tales of King Arthur and his court for a nineteenth century audience,
emphasising their splendour as well as the challenge they could express to
Victorian materialism and industrialisation
The tales of King Arthur and his knights have been popular for
hundreds of years and have been reworked and re-interpreted by different groups
and cultures. Thought to have originated in Celtic, Welsh, and Irish legends,
the figure of Arthur appears as either a great warrior defending Britain from
both human and supernatural enemies, or as a magical figure of folklore.
Though the tales were drawn from different sources – many of
the most famous coming from French writers such as Chretien de Troyes – the
story's international popularity largely came from Geoffrey of Monmouth's Historia
Regum Britanniae (History of the Kings of Britain), Thomas Malory's Le Morte
d'Arthur and, most importantly, the poems of Alfred, Lord Tennyson. These works
inspired the second phase of Pre-Raphaelitism. The Pre-Raphaelite Brotherhood,
founded in 1848, was a group of men (and a few important women) who challenged the
artistic conventions of genre painting to instead 'go to nature' and work with
the idea of realism.
Their principal themes were initially religious, but they
also used subjects from literature and poetry, particularly those dealing with
love and death. King Arthur’s knights of the Round Table presented the idea of
an all-male community collectively devoted to a reforming project, and the
brotherhood’s collaborative working patterns also reflected that.
The Arthurian Tales do not focus on one particular person,
genre or event but they encompass numerous people and places – from the
Guinevere and Lancelot scandal to Sir Gawain and his encounter with the
mystical Green Knight; the sorceress Morgan Le Fay and Nimue; Arthur’s wizard
advisor, Merlin, to Arthur’s son, Mordred, who brought about the legendary king’s
ultimate downfall. Some of the most well-known paintings of the movement were
those depicting The Lady of Shalott by John William Waterhouse.
I am Half-Sick of Shadows, The Lady of Shalott Looking at Lancelot and The
Lady of Shalott – John William Waterhouse
These paintings tell the story of a woman who is forbidden
to leave her tower and who can only see the outside world through a mirror, otherwise
she will suffer a curse. The first part of the poem was depicted in the 1915
painting I am Half-Sick of Shadows Said the Lady of Shalott. The lady,
trapped in her tower, spends her days weaving the images she sees in the mirror
and at the moment captured in the painting, there are two lovers.
Or when the moon was overhead,
Came two young lovers lately wed;
'I am half sick of shadows,' said
The Lady of Shalott.
In the painting there is a poppy
in the reflection of the mirror but not in the foreground which foreshadows the
lady’s death since poppies symbolise eternal sleep. Also, the weaving shuttles
are shaped like boats which predict the final part of the poem. The next part
was shown in the 1894 painting The Lady of Shalott Looking at Lancelot. The
Lady sees Sir Lancelot in her mirror and falls instantly in love. She turns to
properly look at him and the mirror shatters, fulfilling the curse.
Out flew the web and floated wide;
The mirror crack'd from side to side;
'The curse is come upon me,' cried
The Lady of Shalott.
The lady is painted looking
directly at the viewer with a defiant stare. She is tangled in the threads of
her tapestries holding a small wooden shuttle, preparing to flee in order to
find Lancelot. Within the context of 19th century Britain,
Waterhouse may have implied that she is a woman of agency, defying her
confinement and going after what she desires most. One could also say that
Tennyson’s poem captured many artists’ ideas, especially the Pre-Raphaelite
Brotherhood, about whether they should capture nature truthfully or give their
own interpretation. The Lady’s actions could reflect an artist breaking away
from imagination and seeing the world in real life.
The final instalment of the poem
is captured in The Lady of Shalott, first painted in 1888. The lady moves
downstream in a boat inscribed with her name on the prow, holding the chains
that tether her. On the boat is a crucifix symbolising sacrifice and three
candles, two of which have been blown out suggesting that her death will come
soon. The woman’s lips are parted showing how she is singing her final song
before her death.
For ere she reach'd upon the tide
The first house by the water-side,
Singing in her song she died,
The Lady of Shalott.
The Beguiling of Merlin – Edward Burne-Jones
Another favourite is The
Beguiling of Merlin painted in 1872 by Edward Burne-Jones. It shows
a snake-crowned Nimue enchanting Merlin with his own sorcery by binding him
into a tree in the forest. Merlin's helplessness and entrapment are conveyed by
the serpentine lines of the drapery and tree branches that imprison the wizard
and deprive him of his power. This fluidity of line also captures the ebb and
flow of magic. Furthermore, his droopy hands and glass-eyed demeanour reinforce
the sense of paralysis. As in much of his work, Burne-Jones explores power
relationships in which a man falls victim to a hidden, threatening female power.
Guenevere – William Morris
The character of Guinevere and
the scandal with Lancelot is also repeatedly depicted. Guinevere is the
legendary Queen and wife of King Arthur and in French romantic interpretations
she has a love affair with her husband’s chief knight and friend, Lancelot.
William Morris's painting Guenevere or La Belle Iseult is a
portrait of Jane Burden, the model who later became Morris's wife, in medieval
dress. She was discovered by Morris and Rossetti when they were working together
on the Oxford murals and later modelled for several other artworks, such as
Rossetti's Proserpine and Sir Launcelot in the Queen's Chamber.
In both Morris’ and Rossetti’s
paintings Burden is cast not only as a queen but as an adulteress. The way life
imitated art has an uncanny force, considering that Burden later became
Rossetti's lover in the 1860s. The rich colours, the emphasis on
pattern and details such as the missal reveal where Morris’s true talents lay.
He was less at home with figure painting than with illumination, embroidery,
and woodcarving, and he struggled for months on this picture. Most people will
know him as a key figure in the Arts & Crafts Movement having probably seen
his wallpapers and designs.
The Last Sleep of Arthur in Avalon
– Edward Burne-Jones
Six and a half metres wide, The
Last Sleep of Arthur in Avalon marked the culmination of Burne-Jones’
fascination with Arthurian legend. The painting, which took 17 years and
remained unfinished, depicts a moment of rest and inaction. Mortally wounded,
King Arthur rests his head on the lap of his sister, the fairy queen Morgana,
who took her brother to Avalon after his defeat in battle. At the centre of the
Arthurian tales is the idea that he did not actually die, but sleeps in Avalon,
waiting for the moment when the nation will most need his return.
From its earliest roots the
story of King Arthur has been changing hands and remoulded to fit the cultures
and society of the time. The legend resonated deeply with the Pre-Raphaelites
and their Victorian audience. In part they found splendour in it – sparkling armour
and swords, flying banners and beautiful maidens in flowing robes. On the other
hand, Arthur became a symbol of their crusade against the mundane and the crude
materialistic era that was Victorian industrialisation. They reshaped Arthur to
become a means to convey the morals and the monarchical society of the time.
Helen S reveals
how the pharmaceutical industry hides unfavourable results from medical trials.
She warns of the risks to human health, and proposes how we can make medical
research more robust and trustworthy
Have you ever questioned the little pills prescribed by your doctors? I had not, until I
began working on this article – and the truth is, we know less than we should
about them. It is scary to think that, though these medications are supposed to
heal us when we are feeling poorly, in reality that is not always the case.
Clinical trials are experiments or observations done for clinical research that compare
the effects of one treatment with another. They may involve patients, healthy
people, or both. Some are funded by pharmaceutical companies, and some are
funded by the government. I will mainly focus on the phenomenon of hiding
negative data in industry-funded trials.
A study done in 2005 by Evelyne Decullier, Senior Research Fellow at Hospices Civils de
Lyon, compared registered trials that had failed with those that had succeeded,
and examined which ones appeared in medical journals and the academic literature. The
researchers found that only half of trials are ever published and that positive
results are 2.5 times more likely to be published than negative results.
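The scale of the distortion can be sketched with a little arithmetic. The calculation below is a toy example built only on the two figures above (half of all trials published; positive results 2.5 times more likely to be published), plus the simplifying assumption, not taken from the study, that positive and negative trials are actually run in equal numbers:

```python
# Toy publication-bias calculation (assumption for illustration: equal
# numbers of positive and negative trials are actually run).
ratio = 2.5         # positive results are 2.5x more likely to be published
overall_rate = 0.5  # only half of all trials are ever published

# The average of the two publication probabilities must equal the overall
# rate: (p_neg + ratio * p_neg) / 2 = overall_rate
p_neg = 2 * overall_rate / (1 + ratio)  # chance a negative trial is published
p_pos = ratio * p_neg                   # chance a positive trial is published

# Share of the *published* literature that reports a positive result
share_positive = p_pos / (p_pos + p_neg)
print(round(p_neg, 2), round(p_pos, 2), round(share_positive, 2))
# Prints 0.29 0.71 0.71: roughly 71% of published trials look positive,
# even though (under these assumptions) only 50% of trials actually were.
```

In other words, even before anyone deliberately hides anything, selective publication alone can make the literature look far more favourable than the trials themselves were.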
Now, you might say, 'How can those trials possibly affect me or other ordinary people?' Well, read on…
Why it matters for your health
Lorcainide was an anti-arrhythmic heart drug and was tested in clinical trials in the 1980s.
The results showed that patients given Lorcainide were far more likely to die
than patients who weren’t. But those
results were not published until 10 years later, and during that time, doctors
had been prescribing the drug to patients. According to Dr Ben Goldacre,
director of DataLab at Oxford University, it has been estimated that more than 100,000
people who had taken Lorcainide died in America as a result. And Lorcainide is
not an isolated case. Similar things may be happening in other clinical trials
relating to drugs such as anti-depressants or cancer treatments.
The lack of
transparency can also affect decisions on government spending. From 2006 to 2013,
the UK government was advised to buy a drug called Tamiflu, which was supposed
to reduce pneumonia and death caused by influenza. The UK government went on to
spend £424 million stockpiling this drug. But when systematic reviewers tried
to gather up all the trials that had been done on Tamiflu, they realised that
the government had only seen a small number of them. They battled for
years to get the trials from the drug company, and when they finally got
all of them, they found that Tamiflu was not sufficiently effective to justify
so large a cost. If companies continue to withhold trials, similar expensive
trials are going to be repeated, putting volunteers, patients and doctors at risk.
The companies have failed us, so what about the law? In America, it is required
that medical trials held in the US need to be registered and have their results
submitted within one year of the trial finishing. However, when scientists
looked back at the data in 2015, they found out that only 20% of trials were
submitted and reported.
However, industry-funded research is not the complete villain in this
situation. In this type of research, discoveries are more likely to occur (Adam,
2005; Adam, Carrier, & Wilholt, 2006; Carrier, 2004). And thanks to funding from
industry, scientists are under less pressure to present something that is
directly linked to real-world use, compared to public or government-funded
projects (Carrier, 2011). And as we all know, new technologies all start with
discoveries.
There are some suggestions from scientists for improving the current situation:
to increase transparency, to increase reproducibility and, the most doable one,
to encourage effective criticism (Elliott, 2018). Of these, the easiest to put
into practice is more effective criticism. It is important to
acknowledge that criticism doesn’t need always to be negative. Though the
agencies that are usually responsible for evaluation can be limited by a
variety of reasons, such as understaffing
or political issues, “they can get more involved in designing the safety
studies performed by industry in specific cases,” suggests philosopher of
science Kevin Elliott. (A safety study is a study carried out after a medicine has been
authorised, to obtain further information on a medicine’s safety, or to measure
the effectiveness of risk-management measures.)
Luckily we have
the technology in our hands. AlphaFold is leading the scene: it has made
some amazingly accurate predictions of the 3D shapes of proteins,
meaning scientists can use it to facilitate the design of stable proteins. It
can also help to make sense of X-ray data to determine crystal structures;
before AlphaFold was invented, determining the structure of a protein for
structure-based drug design could take 3-4 years. Now a predicted structure can
be in front of you in less than an hour.
Everyone's body is different, some people might have allergies, and some drugs
might not even work for some people. To avoid these situations, technologies
such as AI could make your prescription personalised to you. By analysing DNA
information sent to your pharmacy, AI could determine the dosage and the drug
suitable for you. The 3D-printed 'polypill' is a single pill that contains all
the personalised medication you need in a day, which is remarkable.
Hopefully it is now a little easier to understand the importance of
transparency in clinical testing. Trial results are never just numbers – they
are directly linked to the lives of millions. Pharmaceutical companies were not
simply hiding data – they were hiding the deaths of volunteers and patients, and
the money of families wasted on more expensive but less effective treatments.
There must be, without doubt, serious consequences if companies don't follow
regulations. I believe there will be hope if scientists use technology
effectively and if a better research environment is created for them.
Elsa P traces the history of cats in
Ancient Egypt – from humble ratcatcher to god, and from pet to votive offering
– exploring both how they were represented, and how they lived within Egyptian society
Animals are an important presence in many aspects of society,
culture and religion. They act as companions but also symbols, idols and gods. Their
significance goes back centuries and their role in the development of cultures
can be felt today. What I want to look at in this piece is the question, why is
the cat so important?
The earliest historical depiction of the upright tail,
pointing ears and triangular face of the domesticated cat appeared around 1950
BCE, in a painting on the back wall of a limestone tomb around 250 kilometres
south of Cairo, Egypt. After this first appearance, cats soon became a fixture of
Egyptian paintings and sculptures and were even immortalized as mummies. Cats
possessed the art of social climbing as they rose in status from rodent killer
to pet to representations of gods. Does this mean the domesticated cat had a
significant impact on the development of ancient Egypt?
Most ancient Egyptian artistic representations of cats were
based on the African wildcat. With a light build, grey coat and black or
light-coloured spots and stripes, the African wildcat is very similar to the
tabby cat that we see in most domestic homes today.
With a prominent farming culture in ancient Egyptian
society, cats were a useful tool to chase away dangerous animals such as
venomous snakes and scorpions but progressively became symbols of divinity and
protection in the ancient Egyptian world.
Paintings on Egyptian tombs show cats lying or sitting below
chairs and chasing birds and playing. A recently discovered pet cemetery
(dating to the 1st and 2nd centuries CE) found on the
outskirts of Berenice, on Egypt's Red Sea coast, holds the remains of cats,
believed to have died of old age, wearing remarkable iron and beaded collars.
These discoveries suggest that cats were probably kept as companions and were
loved and respected animals in Egyptian society.
“The ancient Egyptians, in general, did not worship animals.
Rather, they saw animals as representations of divine aspects of their gods,” according
to Julia Troche, an Egyptologist and assistant professor of history at Missouri
State University. In addition to domestic
companionship, cats were seen as vessels that the Egyptian gods chose to
inhabit, and whose likeness such gods chose to adopt. One god that was depicted
as a cat was Bastet, the goddess of the home, domesticity, women’s secrets,
cats, fertility, and childbirth. Bastet was first depicted as a fierce lioness,
but later as a domestic cat, a dutiful mother with several kittens and a
protector of the family. In tomb paintings, a representation of fertility was a
cat sitting under a woman's chair, possibly arising from the fact that a female
cat gives birth to a relatively large litter. Around the 5th century BCE
a large cult of Bastet devotees developed in the city of Bubastis near the
modern-day city of Zagazig, north of Cairo. They would gather around a massive
temple and would leave small cat statues as offerings for Bastet. This
popularity persisted for almost another 1,500 years, which further
explains why the ancient Egyptians respected and honoured the cat in their
society. Ancient Egyptians thought of cats, more generally, as protectors, while
at the same time respecting their ferocity. Sekhmet, the goddess of
war, is depicted as a lioness and was said to be a warrior and protector deity
who kept the enemies of the sun god Ra at bay. “In some mortuary texts, cats
are shown with a dagger, cutting through Apopis: the snake deity who threatens
Ra at night in the Underworld,” Julia Troche explains.
As cats were fierce protectors in the eyes of the ancient
Egyptians, it comes as no surprise that they played a vital role in the afterlife.
Because of their highly respected status, the killing of cats in ancient Egypt
was illegal. However, killing for mummification may have been an exception. A
study published in 2020 reported the use of X-ray micro-CT scanning on ancient Egyptian mummified
animals. The study explored the skeletal structure of a mummified cat and the
materials used in the mummification process. The results showed that the cat
was smaller than expected and that 50 percent of the mummy was made up of
wrapping. Through examination of the cat's teeth, the scientists deduced
that it was around 5 months old when it died and that the cause of the death
was deliberate breaking of the neck. The study concluded that the cat was most
likely purposely bred for mummification to provide votive offerings for the gods
with cat associations. For example, the cat was used as a votive offering for
the goddess Bastet. Mummified cats were bought by temples to sell to pilgrims who
may have offered the mummified animals to the gods in a similar way that
candles may be offered in churches today. Egyptologists have also suggested
that the mummified cats were meant to act as messengers between people on earth
and the gods. To uphold the demand for such offerings, entire industries were
devoted to the breeding of millions of cats to be killed and mummified, and
also so that they could be buried alongside people. This happened largely
between about 700 BCE and 300 CE.
Cats were respected creatures in ancient Egyptian society. The
representation of the ancient Egyptian gods as cats influenced the citizens’
behaviour towards these animals and played an integral part of religious
practice. They also were a useful tool in the agriculture industry, keeping
pests away from farmland. This admiration is still prominent in today’s western
culture as many people keep cats as home companions and as pest control.
El-Kilany, Engy and Mahran, Heba, What Lies Under the Chair! A Study in Ancient
Egyptian Private Tomb Scenes, Part 1, American Research Center in Egypt, 2015
Johnston, Richard, Thomas, Richard, Jones, Rhys, Graves-Brown, Carolyn,
Goodridge, Wendy and North, Laura, Evidence of Diet, Deification, and Death
within Ancient Egyptian Mummified Animals, Scientific Reports, 10(1), online,
20 August 2020
Josie M, Transition Representative on the WHS Student Leadership Team, explores how ideologies are constructed in order to justify atrocity, in relation to Britain’s exploitation and colonisation of Africa
Throughout history, and particularly
in relation to the British empire, Britain’s international dominance has been
obtained and developed through the enforcement of British beliefs and culture. When
Africa became the object of Britain’s desire in the late 17th
century, as an integral part of the transatlantic slave trade, Africa’s
apparent lack of ‘civilisation’ – as determined by Britain’s definition – was
used to justify the horrendous treatment of African slaves and was later
instrumental in gaining public support for African colonisation.
This reflects the wider phenomenon of how the ‘western’
notion and interpretation of what constitutes a respectable civilisation has
been severely damaging to the periphery during colonisation and European land
acquisition. I am taking the term ‘western’ to refer to ideas that have been
popularised in economically developed areas in Europe; and ‘periphery’ to refer
to countries that are viewed as less economically developed nations with ‘poor
communications and sparse populations’: ‘Defined
in geographical or sociological terms, the centre represents the locus of power
and dominance and importantly, the source of prestige, while the periphery is
subordinate’ (Mayhew, 2009).
A widely circulated definition describes civilisation as a complex society, concerned
with so-called civilised things:
money, art, law, power, culture, organised belief systems, education,
hierarchy, trade and agriculture (Dictionary.com, 2022). This proposed
definition contains only a limited range of
categories for observation: 17th and 18th century Britain
also used similar definitions to decide what constituted a respectable
civilisation. When British explorers and leaders
arrived in Africa, they found peoples and cultures that operated very
differently to the commercial towns and cities of Britain. Technological
advancements within Britain meant that emerging industrialisation within towns
and cities was considered a major manifestation of civilisation, so Africa’s lack of these particular trademarks led to
the continent being branded as ‘Darkest Africa’, which was the idea that the
people were savage and brutal and incapable of governing themselves.
As a result of this racist
ideology, African people were then portrayed as uncivilised and inferior to the
white British classes who sought to rule and profit from them. In Africa, the
cradle of humanity, kingdoms and tribes had been evolving for thousands of years,
all the while developing rich histories and cultures that shaped daily life.
However, this vibrant tapestry of language, music, art, customs, trade, and
religion was not seen as such – in the eyes of the British, the tribal system
and Africa’s lack of advanced weaponry and technology meant that it was their
‘rightful place’ to be subservient to the colonising powers, and slavery was a
means of achieving this submission.
The shifting uses of Christianity
The development of the transatlantic slave trade and the
eventual European colonisation of Africa are heavily intertwined, with the racist
ideology developed during the slave trading period resurfacing and being used
to justify colonisation. Across
Europe, Christianity had long been associated with civilised society and seen as
the pinnacle of world religion. Christian teachings were used to
justify the poor treatment of slaves and their forced removal from their
homelands. Propaganda that Africans “worshipped
the devil, practiced witchcraft, and sorcery” among other evils was rife, these
practices directly opposing many values held by religious Europeans. As a
result, one of the foremost Christian missions was employed: to evangelise and
spread the word of God.
Pseudo-scientific race theories
were also beginning to emerge at this time, suggesting that black races were
genetically inferior to white races and so God required that they serve their
white masters. Christianity’s evangelical mission was utilised in justifying
the removal of African people from their ‘devil worshipping’ cultures and
bringing the ‘heathens’ to Christian lands where they could be saved by the
Gospel and brought into the light, thereby spreading the faith and achieving
one of the primary objectives of the religion.
These race theories and evangelising missions were later
turned on their head when the slave trade was abolished on 1 May 1807. The trade did
not stop instantaneously: Britain continued to expand its economic horizons
through increased trade with India and the Far East, in order to maximise its
global reach. By the later 1800s, colonial expansion into Africa became the
new object of European interest, and the 'Scramble for Africa' formally began
with the Berlin Conference of 1884-85.
During African colonisation, the popular Christian mission altered
from justifying ferrying slaves to Christian countries to deliver them
liberation through the Word of God, to directly opposing slavery and all the
evils associated with the practice. This altering of common Christian beliefs
about slavery was employed by British leaders to gain support for direct
British involvement in Africa. David Livingstone, an extremely popular and
influential explorer and missionary at the time, called for a worldwide
crusade to defeat the slave trade controlled by Arabs in East Africa. The
British were then able to turn the tide of belief by establishing their own
moral authority on the issue: they created another enemy in the Arabs and were
then able to present their quest for land acquisition as ethical because they
were supposedly assisting in the eradication of slavery – a system they had
exacerbated enormously – by fuelling colonisation and increasing their
involvement in Africa.
Livingstone's three Cs – Commerce, Christianity and Civilisation – were then
employed as the main objectives of British
administration in Africa. In this way, Britain portrayed itself as a more
innocent party that was merely extending a great opportunity to Africa to
modernise in the same way it had. However, this was not the reality of the
situation: the true motives behind imperial expansion – competition and profit –
were often disguised behind this veil of apparent moral authority. The
plight of many African nations that suffered at the hands of European expansion
was then blamed on themselves, on their own ‘savage’ ways, when in fact
European nations were instrumental in causing many of the issues of corruption,
instability and poverty, which persist as legacies of colonisation today.
In conclusion, the western definition of civilisation was warped
and used by the British to justify a campaign of control and submission
throughout Africa. This method of obtaining control allowed Britain to profit
and develop on an immense scale, whilst the African nations that Britain
occupied had their natural resources exploited, their people dehumanised, and
their cultures and ways of life demonised as savage and barbaric. In many
cases, Christianity was used as a means to justify these actions because it was
seen as such a pinnacle of civilisation by the Europeans, and was believed to
go hand-in-hand with respectable society.
Over 12 million African people were forcibly removed from
their homeland and sold into slavery during the transatlantic trade, and
millions more suffered as a result of colonisation and extortionate land
acquisition by global powers. And critical in enabling human suffering and
exploitation on such a massive scale was the damaging western definition of
civilisation, which resulted in extraordinary pseudo-scientific race theories being
used to justify horrific actions.
Agnes P. in Year 9 takes us on a lively whistle-stop tour of key features and sights in the history of Classical Western Architecture, looking at the three main styles – Ancient Greek, Roman and Byzantine – that underpin the architecture we see around us today
Architecture governs our lives. We live in a metropolis and everywhere
we turn there is a new street with buildings from a variety of eras that give
us the ability to eat, sleep and to live. In the Palaeolithic period, roughly
2.5 million years ago, when humans lived in huts and hunted wildlife for food,
the key purpose of architecture was to provide shelter, but now, we have many
uses for it, due to the wealth, wisdom and resources amassed by humanity over
2.5 million years. But we can still trace the roots of much modern architecture
back to ancient times.
Archaic architecture from as early as the 6th
century BC has influenced many architects over the past two millennia. If you
have ever been to the British Museum, a building designed to mimic the Greek
style, and looked up at the columns just before the entrance, you will have
noticed the ornate capitals, decorated with scrolls. These derive from the Ionic order, one of the two principal orders of Archaic architecture alongside the Doric (acanthus leaves came later, with the Corinthian order). The Doric order occurred more often on the Greek mainland and in the western Greek colonies of Italy and Sicily, while the Ionic order was more common among Greeks in Asia Minor and the islands of the Aegean. These orders were crucial if you were an
architect living in 600 BC. Temples were buildings that defined Greek
architecture. They were oblong with rows of columns along all sides. The
pediment (the triangular bit at the top) often showed sculpted scenes from
Greek mythology or victories achieved by the Greeks. The wealth that was
accumulated by Athens after the Persian Wars enabled extensive building
programs. The Parthenon in Athens shows the balance of symmetry, harmony, and
culture within Greek architecture; it was the centre of religious life and was
built especially for the goddess Athena, to show the strength of Athenian belief. Greek
architecture is very logical and organised. Many basic theories were founded by
Greeks and they were able to develop interesting supportive structures. They
also had a good grasp of the importance of foundations and were able to use
physics to build stable housing.
The Romans were innovators. They developed new construction
techniques and materials with complex and creative designs. They were skilled
mathematicians, designers and rulers who continued the legacy left by Greek
architects. Or as the Greeks might put it: pretentious copycats who stole their
ideas and claimed them as their own. We sometimes forget that the origins of
Roman Architecture lay within Greek history. Nonetheless, brand new
architectural structures were produced, such as the triumphal arch, the
aqueduct, and the amphitheatre. The Pantheon is the best-preserved building
from Ancient Rome, with a magnificent concrete dome. The purpose of the
Pantheon is unclear, but the decoration on the pediment shows that it must have
been a temple. Like many monuments, it has a chequered past. In 1207 a bell
tower was added to the porch roof and then removed. In the Middle Ages, the
left side of the porch was damaged and three columns were replaced. But despite
further changes, the Pantheon still remains one of the most famous buildings
and the best preserved ancient monument in the world. It even contains the
tombs of the Italian monarchy and the tomb of Raphael, the Italian Renaissance
painter. Roman architecture is known for being flamboyant, and many features reflect
the great pride of this culture, such as the great pediments, columns, and statues
of Romans doing impressive things. These all show off their understanding of
mathematics, physics, art, and architecture. Many American designs have been
inspired by this legacy, including the White House and the Jefferson Memorial,
which couldn’t look more Roman if they tried.
Byzantine architecture was the style that emerged in
Constantinople. Buildings included soaring spaces, marble columns and inlay,
mosaics, and gold-coffered ceilings. The style spread from
Constantinople throughout the Christian East and into Russia. Hagia Sophia is a
basilica with a 32-metre main dome, dedicated to the Holy Wisdom of God. The
original church was begun under Constantine I in 325 AD and consecrated by his son in 360 AD; it was damaged by fire during a riot in 404 AD. In 558 AD an earthquake brought down the dome, which was rebuilt to a steeper, more stable design. The church was looted in 1204 by the Venetians and the Crusaders, and served as a Catholic cathedral until the Turkish conquest of Constantinople in 1453, when Mehmed II converted it into a mosque. In 1935 it was made a museum, but it was then converted
back into a mosque in 2020. The history of the Pantheon looks paltry compared
to the history of Hagia Sophia!
Byzantine architecture remains a reminder of the spiritual and cultural life of the people who lived in the Byzantine era. The use of mosaic during this era has inspired modern architects to create themed works using gold mosaic to evoke beauty, religiosity, and purity.
Sources: Encyclopædia Britannica; The London Library; The Metropolitan Museum of Art website
Grace S, Year 13 Student, writes about the recent Biology trip to visit the Francis Crick Institute.
WARNING – This article includes mentions, in a biomedical sense, of some topics which some readers may find disturbing, including death, cancer and animal testing.
Last Friday some of the Biology A-level students were privileged enough to go on a trip to the Francis Crick Institute. All sorts of biomedical research goes on inside the Institute, but we went with a focus on looking at the studies into cancer. During our day, we visited the ‘Outwitting Cancer’ exhibition to find out more about the research projects and clinical trials that the Crick Institute is running; we had a backstage tour of the Swanton Laboratory to learn about the genetic codes of tumours and find out more about immunotherapy and I attended a discussion on the impact pollution can have on the lungs.
We started our trip by visiting the public exhibition ‘Outwitting Cancer’, with Dr Swanton as our tour guide. We first walked through a film showing how tumours divide and spread, using representations from the natural and man-made world. This film also showed that tumours are made up of cancer cells and of T-cells (cells involved in the immune response) trying to regulate the growth of the cancer cells. We then moved through to an area where several clips were playing, outlining the different cancer projects underway at the Crick Institute. Many different clinical trials and research projects into understanding and fighting cancer were on display, but the one which fascinated me the most involved growing organoids (otherwise known as mini-organs) from stem cells. The stem cells would be extracted from the patient and used to grow these organoids, which would then be used to see how they respond to different drugs. This would allow each treatment to be highly specific to the patient, and so perhaps lead to higher survival rates. In this same section of the exhibition there was a rainbow semi-circle of ribbons, with stories clipped to them written by visitors about their experiences with cancer, ranging from those with lived experience to those simply curious to learn more. It was a fantastic exhibit and I recommend you give it a visit yourself; it’s free!
As interesting as this exhibit was, for us the highlight was a backstage tour of the Swanton Laboratory, followed by talks from members of the team working there. We learnt that they have found that there is heterogeneity within tumours, a fact that was not known just a few years ago. What this means is that different sections of a tumour can have completely different genetic codes. This could significantly change the way in which tumours are analysed and treatments are prescribed. Previously, one tumour sample was thought to be representative of the whole tumour; it is now known that this is not the case, and multiple samples from different sections of the tumour should be taken to get a comprehensive view of its structure and how best it could be treated. Linked to this, one member of the team, a final-year PhD student, showed us images they had taken and colour-coded to show the different cells present in a tumour. One of the main reasons cancer develops to the point where treatment is needed is that the body’s immune system has failed to neutralise the cancer cells, and the team was working to find out why this may be. In one of the images shown to us, a different type of immune cell had actually formed a wall around the T-cells, preventing them from reaching the cancer cells in order to eliminate them. This is important knowledge when considering immunotherapy treatments, which encourage the body’s own immune system to fight back against the cancer: in this case there would be little benefit to injecting or strengthening T-cells, as they would not be able to reach the cancer cells. Immunotherapy itself is still a relatively recent invention, and it is considered only after treatments such as chemotherapy have not been effective. By this stage the cancer is more advanced and much harder to treat with immunotherapy, so it is hoped that in the future immunotherapy will be considered before more generalised treatments such as chemotherapy.
Work is also being done to understand late-stage cancer. We were allowed into one of the stations where practical work is done (wearing red lab coats to indicate that we were visitors) and shown a series of slides indicating where biopsies (a biopsy is the removal of a tissue sample) might be taken from a tumour. It was explained to us that TRACERx (the name of the project being undertaken in the Swanton Laboratory) had set up a programme where people living with late-stage cancer can consent to their tumours being used for post-mortem research. Often these individuals had also signed up for earlier programmes, so information on their cancer at earlier stages was available and it was possible to see how the cancer had progressed. Several of the methods used to store samples were also explained to us, including the use of dry ice (solid carbon dioxide) and liquid nitrogen.
The final presentation I attended (we were on a carousel in small groups) discussed the influence of pollution on lung cancer. It had previously been found that as we age, the number of mutations we carry grows; clearly, then, mutations are not the sole cause of cancer, as not everyone develops it. It has now been theorised that carcinogens, such as the particulate matter found in air pollution, activate these pre-existing mutations. Currently non-smokers comprise 14% of all lung cancer cases in the UK; as the number of smokers drops with growing awareness of the dangers of smoking, the proportion of people with lung cancer who are non-smokers will increase, making research into what may cause this lung cancer even more important. Lung cancer in non-smokers is currently the eighth biggest cause of cancer death in the UK. Two ongoing experiments are studying the effect of exposure to pollution on mutations in the lungs. One is being run within the Institute, exposing mice to pollution, and another in Canada, where human volunteers are exposed for two hours to the average pollution levels of Beijing. Whilst it is unlikely that this exposure will lead to new mutations, it may cause changes in those already present. All of the research projects presented to us are ongoing, and it really was a privilege to see what sort of work is going on behind the scenes.
All of us were incredibly lucky to be able to go on this trip and meet some of the scientists working on such fascinating projects within the Francis Crick Institute. Most of us were biologically-minded anyway, but were we not, this trip certainly would have swayed us.
Alexia P., Head Girl, analyses the historic and future impact of trees on the economy.
‘Money doesn’t grow on trees’. A cliché I’m sure
most people will have heard when they were younger, when they had no
understanding of the true value of money.
However, is this cliché wrong – are there economic benefits to trees?
As of 2020, there are approximately 3.04 trillion
trees on the planet, made up of 60,065 different species. Their uses vary, from
being made into tangible products, such as paper or furniture, to
providing intangible services, such as driving the carbon cycle or retaining nutrients
in biomass to aid farmers in growing crops. Over time, although their uses may
have changed, trees have always been a vital part of our economy, in ways that,
at first, may not be apparent.
Let’s jump back in time. The year is 1690, and the
global dominance of the British Empire is growing. In Britain, most of the
population are in the primary sector of employment, particularly in
agriculture, growing trees to help build houses, or to trade for an animal to
increase income for the household. As timber and fruits were traded amongst
farmers, incomes increased. However, as more villages were established, space
that was previously forestland was cleared of trees, and the supply started to
diminish. The navy – at the time, the biggest in the world – relied on
timber for its ships; to continue to expand the fleet, it had to travel
further abroad. Ships then travelled to America, India, and Europe to gain
resources, power, and valuable influence to create trading alliances that are
still in place today. This extra money and these resources gave Britain an advantage
when the Industrial Revolution hit in 1760, allowing for a quick and smooth
integration of a new, more efficient way of life that further asserted Britain
as a global power and boosted its economy. And all of this stemmed from
the reliance on, and resources of, trees, without which the roots of our economy
would not stand today.
However, as countries have developed, their reliance on
single resources and tangible products has decreased, particularly in ‘advanced’
countries, in favour of services and jobs in the tertiary and quaternary sectors. As
a result, agriculture – including timber production – has steadily declined.
But trees still play a vital part in the growth of
our economy today. In LIDCs and EDCs, such as Brazil, logging and the mass
production of wood have become part of the economy. Although the industry is
environmentally frowned upon, it has an estimated worth of $200 billion
annually, allowing many developing countries that produce this material to
invest further in infrastructure and technology. Nor are the benefits only
national. In some societies, such as in parts of Indonesia, trees and
wood have been used as currency on a local scale, allowing people to trade wood
for farm animals or clothes, encouraging economic activity in smaller
villages that may not have reliable national trading routes. Paper, furniture
and fuel are just some of the other ways in which trees have become so heavily relied
upon in people’s lives, with few substitutes for these valuable resources.
However, the rate at which tree resources are
exploited is becoming too high. In the quest to become economically developed, forest
sustainability has been forgotten. Increasing tropical deforestation causes
loss of biodiversity and a reduction in carbon uptake, affecting further tree
growth in surrounding areas as nutrients are removed.
There have been recent attempts, however, to
preserve the trees and rainforests. In a recent study by Yale School of
Forestry and Environmental Studies, it was determined that rainforests store
around 25% of the world’s carbon, with the Amazon alone storing 127 billion tons.
Releasing this carbon would heavily intensify the enhanced greenhouse effect,
changing the balance of the Earth’s ecosystems.
Sustainable income from trees is becoming more
apparent, particularly in countries where deforestation rates are highest. In
Bangladesh, where the fuel industry relies on wood for 81% of its supply, the logging industry has been
encouraged to collect dead trees, wood waste and prunings rather than felling
further sections of forest. This still allows for an income, whilst ensuring
trees remain part of the ecosystem. Furthermore, there has been a global effort
to move away from the use of wood entirely. Renewable energy, such as solar
power, makes up 26% of the global energy used and is expected to rise to 45% by
2045. Although this means the usage of trees in the economy will decline, it
allows for new income sources, such as eco-tourism that encourages more
environmentally aware holidays; for example, Samasati lodge, Costa Rica. The
lodge uses rainwater instead of transporting water through pipes; is built on
stilts rather than on the ground so as not to disrupt run-off water to rivers; and
blends in with its surroundings so as not to disturb local wildlife, all in an attempt
to make holidays more environmentally sustainable whilst still taking economic
advantage of trees.
‘Money doesn’t grow on trees’. Well, since 2016 in
the UK, it hasn’t. Our bank note system changed from paper to plastic, showing
the progression from a society that once relied on a single produce, to a new, man-made
source. This well represents our economy today and our declining reliance on
trees: what was once the roots of our economy will soon become a thing of the past.
Cara H, Editor of Unconquered Peaks, looks at the key reasons that led David Cameron to hold the 2016 EU referendum.
In this essay I focus on the factors which led to the 2016 Referendum being held, rather than the result. David Cameron promised a referendum on the UK’s membership of the European Union (EU) in 2015, giving the British public the right to decide whether their future would be in or out of the EU. They chose to leave the EU by a margin of 51.9% leave versus 48.1% remain. The UK-EU relationship has always been complicated and fraught, ever since the UK joined in 1973. The factors analysed are ‘important’ in that they led to Euroscepticism in British politics or the British public, and/or led to political pressure on Cameron to hold a referendum on EU membership.
I argue that the UK’s historic
relationship with the EU contrasts sharply with the EU’s current aims. As for
immigration, general anti-immigration sentiment, and the rise of UKIP (which are
very much linked) strongly contributed to Euroscepticism and political pressure
on Cameron. I also touch on Cameron himself, and his decision-making around quelling divisions within his party.
A transactional vs political relationship
Britain has always viewed the EU differently to our European friends. Whilst most of Europe
see themselves as European, Britons are the least likely to have Europe form
part of their identity (see graph below), and do not have the same allegiance
to Europe in comparison to the Germans or the French. Instead, we view our
relationship with the EU as transactional, through a cost-benefit, economic
analysis. This can be clearly traced back to our original reasons for joining.
In the late
1950s, Britain was experiencing a post-war economic rut, while Germany and
France were experiencing strong growth. Britain’s spheres of influence were
declining, and trade with the USA and Commonwealth had decreased. This led to
the belief that joining the bloc might remedy the UK’s economic problems.
Macmillan, the UK Prime Minister at the time, “saw the European Community
as an economic panacea… here was a way in which the British economy could
overcome so many of its problems without resorting to a radical and painful
domestic economic overhaul” (Holmes, n.d.). This analysis of
Britain’s reasons for joining contrasts sharply with the EU’s increasingly
political aims. Though Britain arguably shares the aims of the European Project,
it does not share the same desire to become one with Europe and is interested
in the EU only economically. Having joined for economic reasons, Britain found
later political integration a source of increased tension.
These tensions between an economic, free trade-based union and a political
integratory one, have been the backdrop of the UK’s interactions with the EU.
For example, the Eurozone Crisis especially damaged views in the UK towards
Europe, not simply because of what happened, but because it highlighted the
‘cost’ of remaining a member. The heightened tensions within the
political establishments of the UK and the EU have seeped into the general
public psyche. Therefore, the dual nature of the EU as a trade-bloc and a
political union had a negative impact on the UK’s relationship with the EU, by
increasing Euroscepticism, and in turn increasing political pressure on Cameron
to hold a referendum in 2016.
Immigration concerns conflated with the EU
Freedom of movement is enshrined in the EU’s ‘DNA’. As stated in 1957 in the Treaty of
Rome, it can be defined as ‘EU nationals having the right to move freely within
the European Union and to enter and reside in any EU member state’ (Bundesministeriums des Innern, 2015). Non-EU immigration
levels have always been higher than EU immigration levels, meaning that the
argument around freedom of movement as a cause of unsustainable immigration has
been greatly exaggerated. Nonetheless, it is the perception of EU immigration that has
stuck; the EU became synonymous with immigration of any kind, however misguided
that may be. The combined level of non-EU and EU immigration has put pressure on aspects of British culture
which are not so open to those perceived as ‘non-British’. Integration is often
difficult for those of a different culture. For example, differences in
language, traditions and skills, can lead to those with a strong sense of British
national identity perceiving immigrants negatively, as they threaten what some
see as British culture. And yet this immigration concern is incorrectly conflated
with the EU, as the majority of immigration to the UK has little to do with the
European Union (though one could also argue that all British anti-immigration
sentiment is largely unfounded, regardless of the place of origin). An
excellent paper by Chatham House presents a cross analysis of people’s voting
choices (leave vs remain), compared to their attitudes towards immigration
(both non-EU and EU). The trait that most divided the ‘leavers’ from the ‘remainers’
was their attitudes towards immigration and British culture: nearly ¾ of
‘outers’ agreed that ‘Immigration undermines British culture’.
Therefore, this cultural negativity towards immigration manifests itself in many ways, one of which is opposition to the EU, through the conflation of (any) immigration with EU membership. One of the EU’s most sacred principles is freedom of movement, and the growing number of immigrants since the UK joined the EU has only increased this Euroscepticism, which in turn increased the likelihood of an EU referendum.
UKIP’s sudden rise
UKIP was founded in 1991 and can be categorised as a single-issue party, with the sole
aim of bringing the UK out of the EU, via a referendum. Once Nigel Farage
became leader of UKIP in 2006, it grew in popularity, with gains in the 2013
local elections (22% of the vote), two Conservative Party defections to UKIP in
2013, and impressive results in both the 2014 European Parliament elections
(largest number of seats with 24) and the 2015 General Election (12.5% of the
popular vote). They were most certainly on the up.
This rise led to Cameron’s electoral position becoming increasingly threatened: UKIP is a
right-wing party, whose voters were more likely to be white and older than
Labour’s electorate. Therefore, UKIP was able to split the Conservative vote
(Martill, 2018). In 2014, UKIP managed to gain over a
quarter of the votes in the European Parliament elections, outpolling the
Conservatives. Understandably, this was a clear threat to the Conservative
Party at the time. Though support for UKIP was clearly influenced by other
factors (i.e. factors that pushed voters towards UKIP), UKIP managed to harness
Euroscepticism in the general public, and transform this into meaningful
political pressure on David Cameron to hold a referendum. The nature of UKIP’s
rise – sudden, large, and at a time when the Conservatives did not have a
majority (pre-2015 General Election) – was a very important factor in leading to
the referendum. Arguably, UKIP’s pressure on Cameron led him to call a
referendum, lest he lose public and potentially party support, and inevitably, a
general election. Therefore, due to the rise of UKIP, a party based on support
for a referendum on the EU, Cameron was incentivised to put a referendum
promise in his party’s manifesto in 2015 and hold one in 2016, in order to keep
his Conservative Government in office.
Cameron’s desire for a quick fix
The Prime Minister is by far the main source of authority over whether to hold a
referendum or not, so analysing Cameron is important in answering this essay’s
question. Cameron’s decision around party management was an impactful factor in
leading to the 2016 EU Referendum.
The promise of a referendum can be seen as a ‘quick fix’ method of appeasement to the
Eurosceptic backbenchers. As is clear from the rise of the Conservative
Eurosceptic faction, heightened tensions were forming in the Conservative Party
from 2013 onwards, and this threatened the Party’s ability to govern. Hence,
Cameron felt compelled to manage his party over Europe, by delegating the
decision to the public. When the referendum was initially promised in June
2013, Cameron was concerned with stopping the backbenchers rebelling in the
coalition. He wanted to silence the Eurosceptic wing of the party that had
caused so much trouble for the party over the years; an ‘easy fix’ to a
longstanding problem (Martill, 2018). A comment that encapsulates this, is
from Donald Tusk (former President of the European Council), recounting his
meeting with Cameron after the referendum was announced in 2013:
“Why did you decide on this referendum, [Tusk recounts asking Cameron this] – it’s so dangerous, even stupid, you know, and he told me – and I was really amazed and even shocked – that the only reason was his own party… [He told me] he felt really safe, because he thought at the same time that there’s no risk of a referendum, because his coalition partner, the Liberals, would block this idea of a referendum” (BBC, 2019).
Clearly, party management was very influential in Cameron’s decision-making. Therefore,
Cameron’s desire to repair the divide in his party was hugely impactful in
leading to the 2016 EU Referendum.
In conclusion, the nature of our relationship with the EU, immigration sentiment, UKIP
and Cameron’s decision making were the most important factors in leading to the
EU Referendum. Especially impactful was UKIP’s ability to harness
Euroscepticism into political pressure. But arguably, the end of our EU
membership was spelt out from the beginning.
BBC, 2019. Inside Europe: Ten
Years of Turmoil. [Online]
Available at: https://www.bbc.co.uk/programmes/b0c1rjj7
[Accessed 29 06 2021].
Bundesministerium des Innern, 2015. Freedom of movement. [Online]
Available at: https://www.bmi.bund.de/EN/topics/migration/law-on-foreigners/freedom-of-movement/freedom-of-movement-node.html
[Accessed 29 06 2021].
Chatham House, 2015. Britain, the European Union and the Referendum: What Drives Euroscepticism? [Online]
Available at: https://www.chathamhouse.org/sites/default/files/publications/research/20151209EuroscepticismGoodwinMilazzo.pdf
[Accessed 20 09 2021].
2015. National versus European identification, s.l.: s.n.
Holmes, M., n.d. The Conservative Party and Europe. [Online]
Available at: https://www.brugesgroup.com/media-centre/papers/8-papers/807-the-conservative-party-and-europe
[Accessed 20 09 2021].
Martill, B., 2018. Brexit and Beyond: Rethinking the Futures of Europe. London: UCL Press.
Andrea T, Academic Rep, looks at the nature of globalisation and whether with the context of our history we can consider it a ‘new phenomenon’
Globalisation is an ever-present force in today’s society. Scholars at all levels debate the extent of its benefits and attempt to discern what life in a truly globalised world would entail. But where did it all begin? A comparison of the nature of colonisation and globalisation aids our understanding of this phenomenon’s true beginning, yet no clear conclusion has been reached. This leads us to the matter of this essay, an attempt at answering the age-old question: “Is globalisation a new phenomenon?” Though there are striking similarities between colonisation and globalisation, I do not believe we can see them as one and the same. Due to the force and coercion that characterised colonisation’s forging of global cultural connectivity, and the limitations of colonial infrastructure, we cannot consider it true globalisation. Therefore, though imperfect, the globalisation of the modern world is its own new phenomenon.
Before we can delve into the comparison of colonisation and globalisation,
we must first gain a common understanding of the characteristics of both. There
is no set definition for globalisation, though most definitions portray it as
an agglomeration of global culture, economics and ideals. Some also allude to
an ‘interdependence’ among various cultures and an end goal of homogeneity. (One
could certainly debate whether this reduction of national individuality is
truly a desirable goal, but that is sadly not the purpose of this essay.) Furthermore,
for the purpose of this argument, homogenisation is taken on the basis of
equality: an equal combination of cultures forming a unique global identity. And
the focus of this essay will be the sociological aspects of globalisation, as
opposed to the nitty gritty of the economics.
Though we are far from a truly homogeneous world, we
certainly see aspects of it in the modern day. With an increase in
international travel and trade, catalysed by the rise of technology and
international organisations, we have seen the emergence of mixed cultures and
economies. Take for example the familiar ‘business suit’. Though it is seen as
more of a western dress code, all around the globe officials and businesspeople
alike don a suit to work, making them distinctly recognisable. One might
however consider how truly universal this article of clothing is. Its
origins are found in the 17th-century French court, with a
recognisable form of the ‘lounge suit’ being seen in mid-19th-century
Britain, establishing it firmly as a form of western dress. We then
later see, with its rise to popularity in the 20th century (as
international wars brought nations closer), the suit and many other western
trends adopted across the globe (see picture below). Considering the political
atmosphere of the time, and the seeming dominance of the West, we may doubt
that the adoption of the suit was an act of mutually shared culture. And yet we
see the ways in which the suit has been altered as it passed to different
cultures. Take the zoot suit, associated with black jazz culture, or the
incorporation of the Nehru jacket’s mandarin collar (Indian origin) into the
suits popularised by the Beatles. Though it still remains largely western, with
the small cultural adaptations we can see how something can be universalised
and slowly evolve towards homogenisation. In this way, a symbol as simple as
the suit can be representative of a globalising world.
This is also where we start to see the link between
colonisation and globalisation form. Trade formed an essential part of each
colonial empire – most notably, the trade of textiles. Through the takeover of
existing Indian trade (India in fact formed 24% of world trade prior to its
colonisation), British-governed India exported everything from gingham to tweed, and had a heavy influence on the style of British
society’s elite, which took inspiration from traditional Indian methods of
clothes-making. Furthermore, this notion of the business suit can be seen as
early as when Gandhi arrived in Britain (seeking education on law), dressed in
the latest western trends. However, though the two do certainly share
characteristics, we must consider the intent behind this blend of culture. The
ideal of globalisation suggests an equality that is not echoed in colonisation.
Gandhi did not wear western styles because of his appreciation of British
fashion trends, but instead knew that it was far easier to assimilate if you looked
and acted the same. Similarly, the influence of Indian dress on British dress came
not from a place of appreciation, but from one of exploitation.
Therefore, though the sharing of culture is present in both globalisation and
colonisation, one cannot consider them to be the same due to the underlying
intent. Furthermore, as the intent in modern-day globalisation is in some ways
similarly exploitative, one cannot consider the world truly globalised, but
rather globalising, through a process one could still consider a new form of colonisation.
Another aspect of globalisation we can consider
is the role of the media. Marshall McLuhan, a 20th century Canadian
professor, anticipated this by proposing the idea of a ‘global village’ that
would be formed with the spread of television. His theories went hand-in-hand
with the ideas surrounding ‘time-space compression’ that have come about due to
travel and media. And McLuhan was right: with a newly, instantaneously connected
world, we have become more globalised. With the presence of international
celebrities, world-wide news and instant messaging we have the ability to share
culture and creed, and though far from homogenous we can certainly see small
aspects of global culture beginning to form. Due to this dependence of
globalisation on technology, it is hard to view colonisation as
early-stage globalisation. But one connecting link can be made. One could
argue that the infrastructure implemented for trade routes served as the
technological advancement of the imperial age. Much as air travel does today, the
creation of the Suez Canal and the implementation of railways made it easier to
traverse the globe. This is what further catalysed open trade and contact
between different nation states, one of the most recognisable traits of
globalisation. However, despite this, the trade routes did not improve
communication anywhere near the level we see today, and the impact
technology has had on the connectivity of our globe is too alien to
colonisation for the two to be considered the same. In terms of
interconnectivity, the form of globalisation we see today is entirely novel,
and though they have the same underlying features, the difference between the
two remains like that of cake and bread.
Another aspect of globalisation we can consider is the
spread of religion. Religion is an incredibly important aspect of a country’s
culture, defining law and leadership for hundreds of years. The American
political scientist Samuel Huntington explored religion and globalisation in his work
‘The Clash of Civilizations’ (1996), in which he put forward the following thesis:
due to religio-political barriers, globalisation will always be limited.
But events have challenged this. There has been a rapid
spread of religion around the world due to the newfound (relative) ease of
migration and the access to faith related information through the internet. From
London (often dubbed a cultural ‘melting-pot’) to Reykjavik (rather the
opposite), we see mosques and other religious institutions cropping up. As
religion loses its dependence on geography, we see the homogenising effect of
globalisation. This is also to some extent echoed in colonisation. During the
years of the British Empire, colonisation followed a common narrative of the
white saviour. Missionaries preached a new and better way of life, supposing that
the application of Christian morals and values would help develop the ‘savage’
indigenous tribes. This attempt at integrating western Christian culture into
the cultures present across Africa and Asia shows an early attempt at a homogenised
culture. However, though there was certainly some success in the actions of the
missionaries (as seen with the establishment of many churches across South
Africa), the aggressive nature of this effort once again contradicts the fairness
implied in the concept of a homogenous culture, and globalisation remains a new
phenomenon.
One cannot dispute that colonisation does share a number of
characteristics with globalisation. From free trade to new infrastructure to
the mixing of culture through religion and fashion, we can certainly see aspects
of a globalising world. And yet the forceful intent behind the homogenisation of
cultures in the colonial era removes it from being the true
interconnectivity of nations. This is not to say that the world today is free
of this intent, but the way in which our world today is globalising is approaching
the ideal of globalisation more closely than colonisation ever did, and there
is a distinct enough difference between the two that one cannot consider
colonisation to truly be an early-stage globalisation. Furthermore, the world
today relies so heavily on technology as a facilitator of globalisation that
any notion of globalisation in the 19th century cannot be considered
one and the same. Therefore, the globalisation of our day and age can be
considered its own new phenomenon.
“Globalization Is a Form of Colonialism.” GRIN,
“Globalization versus Imperialism.” Hoover Institution,
Steger, Manfred. “2. Globalization and History: Is Globalization a New
Phenomenon?” Very Short Introductions Online, Oxford University Press,
“What Is Globalization?” PIIE, 26 Aug. 2021,
Maddison, Angus. “The World Economy: Historical Statistics.” Development Centre
Studies, OECD Publishing, 25 Sep. 2003,
Chertoff, Emily. “Where Did Business Suits Come from?” The
Atlantic, Atlantic Media Company, 23 July 2012,