Taking a Stand in Hollywood: WGA and SAG-AFTRA Strikes Explained

Written by: Emilia Lovering

N.B. As of the date of writing, negotiations between SAG-AFTRA and the Alliance of Motion Picture and Television Producers (AMPTP) are still ongoing.

As many of you may be aware, this summer saw almost unprecedented levels of striking in Hollywood, as both the Writers Guild of America (WGA) and the Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA) went on strike in protest at working conditions in Hollywood. This article examines the causes, impacts and results of both strikes.

WGA Strike:

The 2023 Writers Guild of America strike is the largest strike action taken by screenwriters in America since the 2007–08 strike over residuals from DVD sales, when the Guild argued that residuals were a key part of any writer’s income. That earlier strike also established that streaming media would be subject to the Guild’s minimum rates. In 2023, however, this agreement has not stood the test of time: with new streaming services, the amount of media released straight to streaming rather than on cable TV has increased dramatically, and more and more writers are being paid only the minimum for the work that they do. Streaming services are also much more likely to offer fewer writing jobs and shorter contracts, so writers effectively become gig workers. Shrinking writers’ rooms and cutting the time writers are able to spend on set dramatically reduces the opportunities available to new writers entering the industry, as they are offered far less experience than their more well-established counterparts. Instead, studios have offered unpaid internships to ‘allow’ a writer to visit the set of the show that they helped bring into being. Moreover, the strike comes at a time of growing concern about automation in the creative industries, as AI tools now produce scripts far closer to those written by humans – yet, owing to the nature of AIs such as ChatGPT, this work is produced by sampling already-written works. AI therefore risks taking writers’ jobs while using their intellectual property without giving them due credit. The lack of respect shown to writers by Hollywood is intolerable, with AMPTP President Carol Lombardini saying that writers should be grateful to have “term employment” (Thurm, 2023).

Thankfully, after what became the second-longest strike in the history of the Hollywood industry, the AMPTP and the WGA finally reached an agreement. Although the exact wording of the agreement has not been released, we do know that the negotiations resulted in increases to minimum pay, compensation, pension and health fund rates, length of employment, the size of writing teams, and royalties. They also reached agreements regarding AI: it cannot be used to exploit writers, although writers themselves may choose to use tools such as ChatGPT in their own work (Bar, 2023).

Writers’ strike (Chris Pizzello / Associated Press)

SAG-AFTRA Strike

SAG-AFTRA faces similar problems to the WGA, albeit from a different perspective. Much like the writers, the actors are in dispute over residuals from shows, which are only guaranteed when shows are repeated on network television. On streaming services, an actor’s work can be kept up in perpetuity, yet actors are not compensated according to viewership levels, data which the networks do not share. Despite attempts to resolve these disputes in 2019, the pandemic quickly shut down negotiations and undermined the progress that had been made. Similarly, actors receive far less of the profits when films are delivered straight to streaming – as many mid-budget films now are. With streaming, actors face a flat payment, whereas with cinema, the higher the box office takings, the more opportunity there is for an actor to be paid fairly for their contribution. And much like the writers, actors are under threat from the increasing use of AI in film. As I wrote in 2022, regarding the use of ‘deep faking’ actors in film (insert link here), there is little to no legal protection for an actor’s image, which can increasingly be exploited by streaming services using new technologies. SAG-AFTRA is a union that influencers may also join, which means they too may be afforded greater security under any eventual agreement; yet, as many influencers have continued to promote TV and film (forbidden under the SAG-AFTRA strike rules), they will be banned from ever joining the union (Shoard, 2023).

Unfortunately for the actors and influencers, the outcome of their strike remains undecided, but the AMPTP has now finally entered negotiations, willing to make new agreements.

Impacts of the Strikes:

Whilst the strike has been a triumph for the WGA, and will hopefully mitigate the effects of streaming and AI on the creative industries, there have been serious consequences for those industries. During the strikes, many film crew workers in Hollywood, who also face unfair pay for the work they do, have been out of work for months. Many remain wholeheartedly in support of the strikes, yet thousands of dollars of income have been lost, not just in California but also among the estimated 1.7 million people who work in the film industry outside the state (Wilson, 2023). Many jobbing actors are also under financial pressure, as the vast majority of actors do not make the kind of salaries we envisage them having – many are having to rely on foodbanks and charitable donations in order for the strikes to continue. Even UK film and TV workers have been forced to find new jobs and claim benefits because of the Hollywood strikes (Breese, 2023). In order for the film industry to recover, the AMPTP needs to recognise the value that writers and actors hold for Hollywood and compensate them fairly, before it becomes too late and the film industry faces a mass exodus of employees.

Bibliography

Bar, N. (2023, September 28). The Hollywood writers’ strike is over – and they won big. Retrieved from Vox: https://www.vox.com/culture/2023/9/24/23888673/wga-strike-end-sag-aftra-contract

Breese, E. (2023, September 16). UK film and TV workers forced to find new jobs and claim benefits due to Hollywood strikes. Retrieved from Big Issue: https://www.bigissue.com/news/employment/writers-strikes-uk-tv-film-industry-jobs-benefits-hollywood/

Shoard, A. P. (2023, July 14). The Hollywood actors’ strike: everything you need to know. Retrieved from The Guardian: https://www.theguardian.com/culture/2023/jul/14/the-hollywood-actors-strike-everything-you-need-to-know

Thurm, E. (2023, May 5). All About the Writers Strike: What Does the WGA Want and Why Are They Fighting So Hard for it? Retrieved from GQ: https://www.gq.com/story/writers-strike-2023-wga-explained

Wilson, T. (2023, August 10). Hollywood strikes’ economic impacts are hitting far beyond LA. Retrieved from NPR: https://www.npr.org/2023/08/10/1192698109/hollywood-strikes-economic-impacts-are-hitting-far-beyond-la#:~:text=%22It’s%20devastating%20to%20this%20industry,of%20money%20lost%20is%20tremendous.%22

Preventing a dangerous game of hide-and-seek in medical trials

Helen S reveals how the pharmaceutical industry hides unfavourable results from medical trials. She warns of the risks to human health, and proposes how we can make medical research more robust and trustworthy

Have you ever questioned the little pills prescribed by your doctors? I had not, until I began working on this article – and the truth is, we know less than we should about them. It is scary to think that, though these medications are supposed to heal us when we are feeling poorly, in reality that is not always the case.

Clinical trials are experiments or observations done for clinical research that compare the effects of one treatment with another. They may involve patients, healthy people, or both. Some are funded by pharmaceutical companies, and some are funded by the government. I will mainly focus on the phenomenon of hiding negative data in industry-funded trials.

Research done in 2005 by Evelyne Decullier, Senior Research Fellow at Hospices Civils de Lyon, compared registered trials that failed with those that succeeded, and looked at which ones appeared in medical journals and the academic literature. The team consistently found that only half of trials are ever published, and that positive results are 2.5 times more likely to be published than negative ones.
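To see why that matters, here is a minimal, purely illustrative simulation. It is not part of Decullier’s study; the publication probabilities below are assumptions chosen only to echo the 2.5-to-1 ratio quoted above. It shows how selective publication makes a treatment look better in the published literature than it does in the full trial record.

```python
import random

random.seed(1)

# Purely illustrative numbers: 1,000 registered trials of a drug that in truth
# has no effect, so roughly 5% come out "positive" purely by chance.
N_TRIALS = 1000
FALSE_POSITIVE_RATE = 0.05

# Assumed publication probabilities, chosen so that positive results are about
# 2.5 times more likely to be published than negative ones.
P_PUBLISH_POSITIVE = 0.80
P_PUBLISH_NEGATIVE = 0.32

published_positive = published_negative = 0
for _ in range(N_TRIALS):
    positive = random.random() < FALSE_POSITIVE_RATE
    if random.random() < (P_PUBLISH_POSITIVE if positive else P_PUBLISH_NEGATIVE):
        if positive:
            published_positive += 1
        else:
            published_negative += 1

published = published_positive + published_negative
print(f"Positive results in the full trial record: {FALSE_POSITIVE_RATE:.0%}")
print(f"Positive results in the published literature: "
      f"{published_positive / published:.0%}")
```

In this toy world a reader of the journals sees a drug that looks noticeably better than it really is – exactly the game of hide-and-seek this article describes.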

Now, you might say, ‘how can those trials possibly affect me or other ordinary people?’ Well, read on…

Why this matters for your health

Lorcainide is an anti-arrhythmic heart drug that was tested in clinical trials in the 1980s. The results showed that patients given Lorcainide were far more likely to die than patients who weren’t. But those results were not published until ten years later, and during that time doctors had been prescribing the drug to patients. According to Dr Ben Goldacre, director of the DataLab at Oxford University, it has been estimated that more than 100,000 people who had taken Lorcainide died in America as a result. And Lorcainide is not an isolated case. Similar things may be happening in other clinical trials, for drugs such as anti-depressants or cancer treatments.

The lack of transparency can also affect decisions on government spending. From 2006 to 2013, the UK government was advised to buy a drug called Tamiflu, which was supposed to reduce pneumonia and deaths caused by influenza. The UK government went on to spend £424 million stockpiling the drug. But when systematic reviewers tried to gather up all the trials that had been done on Tamiflu, they realised that the government had only seen a small number of them. They battled for years to get the trials from the drug company, and when they finally had all of them, they found that Tamiflu was not effective enough to justify so large a cost. If companies continue to withhold trials, similarly expensive mistakes will be repeated, putting volunteers, patients and doctors in danger.

Pharmaceutical companies have failed us, so what about the law? In America, medical trials held in the US are required to be registered and to have their results submitted within one year of the trial finishing. However, when scientists looked back at the data in 2015, they found that only 20% of trials had submitted and reported their results.

Industry-funded research is not the complete villain in this situation. In this type of research, discoveries are more likely to occur (Adam, 2005; Adam, Carrier, & Wilholt, 2006; Carrier, 2004). And thanks to funding from industry, scientists are under less pressure to present something that is directly linked to real-world use than they are on public or government-funded projects (Carrier, 2011). And as we all know, new technologies all start with discoveries.

Finding remedies

Here are some suggestions from scientists for improving the current situation: increasing transparency, increasing reproducibility and, the most achievable of the three, more effective criticism (Elliott, 2018). It is important to acknowledge that criticism does not always need to be negative. Though the agencies usually responsible for evaluation can be limited by a variety of factors, such as understaffing or political issues, “they can get more involved in designing the safety studies performed by industry in specific cases,” suggests the philosopher of science Kevin Elliott. (A safety study is a study carried out after a medicine has been authorised, to obtain further information on the medicine’s safety, or to measure the effectiveness of risk-management measures.)

Luckily, we have the technology in our hands. AlphaFold is leading the scene: it has made amazingly accurate predictions of the 3D shapes of proteins, which helps scientists design stable proteins. It can also help make sense of X-ray data used to determine crystal structures; before AlphaFold, determining the structure of a protein for structure-based drug design could take three to four years. Now a prediction can be in front of you in less than an hour.

Everyone is different: some people have allergies, and some drugs might simply not work for certain people. To avoid these situations, technologies such as AI could make your prescription personalised to you. By analysing DNA information sent to your pharmacy, an AI could work out the dosage and the drug most suitable for you. The 3D-printed “polypill” goes even further: a single pill containing all the personalised medication you need for one day, which is remarkable.

Hopefully, it is now a little easier to understand the importance of transparency in clinical testing. Trial results were never just numbers – they are directly linked to the lives of millions. Pharmaceutical companies were not simply hiding data – they were hiding the deaths of volunteers and patients, and the money families wasted on more expensive but less effective treatments. There must, without doubt, be serious consequences when companies do not follow regulations. I believe there is hope if scientists use technology effectively and a better research environment is created for future generations.

Approaches to the use of online language tools and AI to aid language learning

Adèle Venter, Head of German at WHS, considers how, in a time when Google Translate has insidiously pervaded every homework task, students could be trained to use online language tools and AI to aid their language learning rather than lead them astray.

 

A few years ago – some of my students may still remember it – my Year 10 German class experienced a dark moment. Upon handing back their homework essays, I asked them to write me a note about the extent to which they had used Google Translate to complete their homework.

The atmosphere was grim as they sat writing their confessions.

It reminded me a bit of the confessing sheep in Animal Farm and I almost felt sorry for them. But no – this had to end. I explained to them that I was in fact not assessing their progress and understanding, but rather how well (or not, as was still often the case at the time) Google’s artificial intelligence manages to translate language completely out of context. I showed them that they were sometimes unable even to translate the German in their own essays, and that they had therefore learnt nothing in the process, making my conscientious attempts to provide feedback on their writing a waste of time.

The Google Translate dilemma

Of course, this has been a much-discussed topic and the bane of foreign language teachers’ lives for some time now, as illustrated by a Twitter joke that did the rounds a while ago.

I still stand by everything I said that day, and I would like to think that it may have changed their outlook somewhat. But I have since changed my approach. Because, as the saying goes: if you can’t beat them, join them.

Ultimately, it is also true that the Internet has become enormously useful in helping people with language acquisition. In the first instance, various language-learning applications have seen the light of day and people casually engage with these on various levels. If it means more people are able to buy croissants in France, or have a basic conversation with their grandchildren who live in Italy, it must be a good thing, right?

Unfortunately, the one thing that has remained true for the acquisition of a foreign language is that there is no quick and easy way to do so. I am of the firm belief that to really learn a language, it takes a lot of time, dedication and perseverance, and that your best chance of becoming proficient is to combine the formal learning of its grammar and vocabulary with immersion and exposure in authentic contexts.

Can AI tools play a useful role?

And so my question is mainly: what are the implications of the use of online tools for the dedicated language learner?

As a linguist, I do not deny that I use these tools myself all the time. But instead of just modelling my use of online dictionaries, conjugators and the like, I have decided to engage my students more fully in the conversation, so that they can be conscious of the advantages and pitfalls of the various tools. I have told my students that I no longer consider Google Translate to be one of the seven deadly sins. After all, online translators have made enormous strides in recent years, and a student workshop with Mrs Rachel Evans, our Director of Digital Learning, revealed that more often than not they tend to translate phrases, sentences and even idioms correctly.

Instead, I spread the message that whatever students do, they must ensure that they remain in charge of the things they write down. If they do not understand what they are writing, or why sentences are formulated in a certain way, they cannot hope to learn from it. I have consequently set up the following rules as guidance:

  1. Always turn to the dictionary first. There are excellent online dictionaries, and it is worth knowing which ones can be trusted to be correct and informative. It is important for students to understand that verified dictionaries offer synonyms, context and further information about a word, which translators do not. Dictionaries are a great source for developing intuition around words in varying contexts. The more advanced student could also draw on etymology. In the making of a linguist, these are skills well worth developing.
  2. Use online technology to enhance knowledge, not replace it. If pupils use the structures they have mastered as a starting point, they can explore replacing elements of the sentence (such as a verb, researched via a dictionary or conjugator).
  3. Keep the channels of communication open. Let your teacher know how you came by a certain word or phrase. I ask my students to highlight phrases they have constructed using a translator and indicate how they researched it. What were they trying to say? Going back to my second rule of course, are there ways of bringing across their meaning, using the structures they can already manage?

At a more advanced level, language learning becomes increasingly adventurous, and as students gain independence they are able to use language tools to develop the sophistication and concision of their expression. It is mainly younger students who experience frustration at their limited ability to express themselves. The following scenario is a perfect example of the problem. A multilingual girl in Year 9, used to expressing herself effortlessly in several languages, produces the following sentence:

„Ich liebe Little Women weil es mich zum Weinen brachte.“

I love Little Women because it brought me to tears.

“Brought”, as the imperfect form of the mixed verb “to bring”, was rather more than I had counted on at her level, and, true enough, she did not understand the verb she had used, having typed in “it made me cry”. In fact, there is a myriad of grammatical complexities in this sentence that she had not yet mastered; she could not hope to construct such a sentence with her level of skill. Instead, a well-chosen adjective in an opinion phrase would have been within her reach and might have expanded her repertoire.

Learning to be independent and in control

I hope that having an open discussion will help students to become conscious of problems such as the example shown here and encourage them to use verified sources, finding the tools that are worthwhile learning aids. If they approach it with the right mindset, these tools can help them become truly skilled linguists who are able to reflect on elements of language in a sophisticated way. If language teachers succeed in creating such healthy learning habits, they are likely to make a meaningful contribution towards developing students’ independence and their ability to be life-long learners in the age of technology.

Who is in control? The human being.

GROW 2.0: a Review

Mr Ben Turner, Assistant Head Pastoral at WHS, looks at some of the key messages from last week’s GROW 2.0 conference, which explored what it means to be human in an A.I. world.

 

Panel: discussions and debate from our recent GROW 2.0 Conference

Two weeks ago, I wrote about the troubling determinism of social media and the corrosive effect of echo chambers on our beliefs. At GROW 2.0, however, Robert Plomin talked to us about a different kind of determinism. In a mesmerising, if slightly worrying, lecture he enthralled us all with his ground-breaking work into what he calls the ‘DNA Revolution’. I say worrying because, according to Plomin, 60% of any child’s GCSE attainment is down to their genetics. The other 40%? Well, scientists have yet to identify any systematic factors that make a discernible difference to a child’s attainment.

Plomin debunked outdated notions of nature vs. nurture and instead asked us to think about our genetic predispositions. He warned that we must never mistake correlation for causation. If, for example, a parent reads to their five-year-old every night, it is easy for us to believe that the child’s predilection for books and literature later in life is because of the parent’s diligence at that early age. Plomin would argue, however, that we have missed the point entirely and ignored the possibility that the parent’s love of reading has been passed, genetically, to their child.

This is a powerful message to share with teachers and parents. As a school, and in these turbulent times as a sector, we offer a huge variety of activities, interests and passions to those we educate. It is all too easy, as a teacher, parent or pupil, to put on your GCSE blinkers and ignore the world around you. If 60% of the outcome is determined by our genetics, why not embrace the other 40%? Fill that time and energy with all of the ‘non-systematic’ activities, trips, hobbies and sports that you possibly can. Because, if we are still not sure what actually makes a difference, variety of engagement is surely the best possible choice.

 

We were also lucky enough to hear from Professor Rose Luckin, a leading thinker on artificial intelligence and its uses in education. It was inspiring to hear about the possibilities ahead of us, but also reassuring to hear, from someone truly immersed in the field, about the primacy of the human spirit. Rose talked about an ‘intelligence infrastructure’ made up of seven distinct intelligences. The most important of these, for her, are the ‘meta-intelligences’, for example the ‘meta-subjective’ and the ‘meta-contextual’. It is our ability to access others’ emotions and our context “as we wander around the world” that Luckin believes separates us from even the most exciting advancements in A.I.

Does VR have a role in education in the future? How can it not have a role, given the exciting opportunities it offers?

 

As an educator, what excited me most in Rose’s talk was the possibility of bespoke, tailored learning for every child. Using data to help us understand the educational needs of learners opens up some amazing possibilities. One could imagine every child having an early-years assessment to understand the penchants and possibilities that lie ahead, leading to a bespoke path of access arrangements and curriculum for each child. A possibility that, as Rose said, is truly exciting, as we would finally be able to “educate the world”.


GROW 2.0 – Being Human in an AI World

On Saturday 21st September we host our second Grow Pastoral Festival. The theme for this year is an examination of what it is to be human in a machine age. What questions should we be asking about the way technology affects our lives and what are our hopes for the future? More specifically, how will our young people develop and grow in a fast-paced, algorithmically driven society and what might education look like in the future?

 
In the morning session, Professor Rose Luckin and Professor Robert Plomin will give keynote addresses and then talk with our Director of Digital Learning & Innovation, Rachel Evans.
Prof Luckin specialises in how AI might change education; Prof Plomin has recently published Blueprint, a fascinating read about genetics and education. We can’t wait to talk about how education might become personalised, and how that change might affect our experience of learning.

In the afternoon we’ll dive into some provocative debate with Natasha Devon, Hannah Lownsbrough and Andrew Doyle, addressing questions of identity, wellbeing and community in an online age with our own Assistant Head Pastoral, Ben Turner.

So what kind of questions are in our minds as we approach this intellectually stimulating event? Ben Turner brings a philosophical approach to the topic.


Is our ever-increasing reliance on machines and subscription to the ‘universal principles of technology’[1] eroding our sense of empathy, compassion, truth-telling and responsibility?



Our smartphones give us a constant connection to an echo-system that reflects, and continuously reinforces, our individual beliefs and values. Technology has created a world of correlation without causation, where we understand what happened and how it happened but never stop to ask why it happened. Teenagers are understandably susceptible to an eco-system of continuous connection, urgency and instant gratification. It is these values that they now use to access their world and that tell them what is important in it.

Are tech giants like Amazon, Google and Facebook creating a monoculture that lacks empathy for its surroundings? If we all become ‘insiders’ within a technology-dominated society, pushing instant buttons for everything from batteries to toilet roll, are we losing the ability to see things from a fresh perspective? By raising children in a world of instant access and metropolitan monism, are we creating only insiders: young people who will never gain the ability to step back and view what has been created in a detached way? How, as parents, schools and communities, do we keep what is unique, while embracing the virtues of technological innovation?

Is social media destroying our free will?

If you are not a determinist, you might agree that free will has to involve some degree of creativity and unpredictability in how you respond to the world. That your future might be more than your past. That you might grow, you might change, you might discover. The antithesis of that is when your reactions to the world are locked into a pattern that, by design, makes you more predictable – for the benefit of someone or something else. Behaviourism, developed in the early twentieth century, is built on collecting data on every action of a subject in order to change something about their experience, often using punishment or reward to bring about the change. Is social media, through its algorithms, gratification systems and FOMO, manipulating our actions and eroding our free will?

Social media is pervasive in its influence on the beliefs, desires and temperaments of our teenagers, and you do not have to be a determinist to see that this will lead to a disproportionate level of control over their actions. Does social media leave our young people with no alternative possibilities – locked in a room, not wanting to leave, but ignorant of the fact that they cannot?

Is social media the new opium of the masses?

Social media has changed the meaning of life for the next generation. The shift in human contact from physical interactions to arguably superficial exchanges online is having a well-documented detrimental effect not only on individual young people but also on the very fabric and make-up of our communities.

In addition to the ongoing concerns about privacy, electoral influence and online abuse, it is becoming increasingly obvious that social media has all the qualities of an addictive drug. Psychologists Daria Kuss and Mark Griffiths wrote a paper finding that the “negative correlates of (social media) usage include the decrease in real life social community participation and academic achievement, as well as relationship problems, each of which may be indicative of potential addiction.”[2]

That is not to say that everyone who uses social media is addicted. However, the implications of ‘heavy’ social media usage among young people increasingly paint an unpleasant picture. The UK Millennium Cohort Study, from the University of Glasgow, found that 28% of the girls between 13 and 15 surveyed spent five hours or more on social media, double the number of boys surveyed who admitted the same level of usage. Moreover, NHS Digital’s survey of the mental health of children and young people in England[3] found that 11 to 19 year olds with a “mental disorder” were more likely to use social media every day (87.3%) than those without a disorder (77%), and were more likely to be on social media for longer. Rates of daily usage also varied by type of disorder; 90.4% of those with emotional disorders, for example, used social media daily.

Panel Discussion

However, there is more to this than just a causal link between the use and abuse of social media and poor mental health. With the march of technology in an increasingly secular world, are we losing our sense of something greater than ourselves? Anthony Seldon calls this the “Fourth Education Revolution”, but as we embrace the advances and wonders of a technologically advanced world, do we need to be more mindful of what we leave behind? Da Vinci, Michelangelo and other Renaissance masters not only worked alongside religion but were also inspired by it. Conversely, Marx believed religion to be the opium of the people. If social media is not to be the new opium, we must find a place for spirituality in our secular age. Even if we are not convinced by a faith, embracing the virtues of a religious upbringing – namely inclusivity, compassion and community – seems pertinent in these turbulent times. Because if we do not, then very quickly the narcissistic immediacy and addictive nature of social media will fill the void left in our young people’s lives, becoming the new opium of the masses.


References:

[1] Michael Bugeja, Living Media Ethics: Across Platforms, 2nd Ed. 2018

[2] Online Social Networking and Addiction – A review of Psychological Literature, Daria J. Kuss and Mark D. Griffiths, US National Library of Medicine, 2011

[3] NHS Digital, Mental Health of Children and Young People in England survey, November 2018

As teachers, do we need to know about big data?

Clare Roper, Director of Science, Technology and Engineering at WHS, explores the world of big data. As teachers, should we be aware of big data, and why? What data is being collected on our students every day? And, equally relevant, how could we increase awareness of the almost unimaginable possibilities that big data might expose our students to in the future?

The term ‘big data’ was first included in the Oxford English Dictionary in 2013 where it was defined as “extremely large data sets that may be analysed computationally to reveal patterns, trends, and associations.”[1] In the same year it was listed by the UK government as one of the eight great technologies that now receives significant investment with the aim of ensuring the country is a world leader in innovation and development.[2]

‘Large data sets’ with approximately 10,000 data points in a spreadsheet have recently been introduced into the A Level Mathematics curriculum, but ‘big data’ is on a different scale entirely, with the amount of data expanding at such speed that it cannot be stored or analysed using traditional methods. In fact, it is predicted that between 2012 and 2020 the global volume of data will increase exponentially from 4.4 zettabytes to 44 zettabytes (i.e. 44 × 10²¹ bytes)[3], and data scientists now talk of ‘data lakes’ and ‘dark data’ (data that you do not know about).
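As a quick back-of-the-envelope check on what “exponentially” means here (just the arithmetic implied by the figures quoted above, nothing more), a tenfold increase over eight years works out at roughly 33% growth per year, with the global volume of data doubling roughly every two and a half years:

```python
import math

# Figures quoted above: 4.4 zettabytes in 2012 growing to 44 zettabytes in 2020.
start, end, years = 4.4e21, 44e21, 8  # bytes, bytes, years

# Compound annual growth rate implied by a tenfold increase over eight years.
annual_growth = (end / start) ** (1 / years) - 1   # ~0.33, i.e. ~33% a year

# At that rate, the volume of data doubles roughly every 2.4 years.
doubling_time = math.log(2) / math.log(1 + annual_growth)

print(f"Implied annual growth: {annual_growth:.0%}")
print(f"Implied doubling time: {doubling_time:.1f} years")
```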

But should we be collecting every piece of data imaginable in the hope it might be useful one day, and is that even sustainable or might we be sinking in these so-called lakes of data? Many data scientists argue that data on its own actually has no value at all and that it is only when it is analysed in context that it becomes valuable. With the introduction of GDPR in the EU, there has been a lot of focus on data protection, data ethics and the ownership and security of personal data.

At a recent talk at the Royal Institution, my attention was drawn to the bias that exists in some big data sets. Even our astute Key Stage 3 scientists will be aware that if the data you collect is biased, then inevitably any conclusions drawn from it will at best be misleading and more likely be meaningless. The same premise applies to big data. The example given by Maja Pantic, from the Samsung AI Lab in Cambridge, referred to facial recognition and the cultural and gender bias that currently exists within some of the big data behind the related software – but this is only one of countless examples of bias within big data on humans. With more than half the world’s population online, digital data on humans makes up the majority of the phenomenal volume of big data generated every second. Needless to say, people who are not online are not included in this big data, and therein lies the bias.

There are many examples in science where the approach to big data collection has been different from the approach taken with data on humans (unlike us, chemical molecules do not generate an online footprint by themselves), and new fields in many sciences are advancing because of big data. Weather forecasting and satellite navigation rely on big data, and new disciplines have emerged, including astroinformatics, bioinformatics (boosted even further recently by an ambitious goal to sequence the DNA of all life – the Earth Biogenome project), geoinformatics and pharmacogenomics, to name just a few. Despite the fact that the term ‘big data’ is too new to be found in any school syllabi as yet, here at WHS we are already dabbling in big data (eg. the MELT project, IRIS with Ark Putney Academy, Twinkle Orbyts, UCL with Tolcross Girls’ and Tiffin Girls’, and the Missing Maps project).

To grapple with the value of big data collections and what we should or should not be storing and analysing, I turned to CERN (the European Organisation for Nuclear Research). The Large Hadron Collider generates hundreds of millions of collisions every second, so CERN has had to think carefully about big data collection. It was thanks to the forward thinking of the British scientist Tim Berners-Lee at CERN that the world wide web exists as a public entity today, and it seems scientists at CERN are also pioneering in their outlook on big data. Rather than store the information from every one of the 600 million collisions per second (and create a data lake), they discard 99.99% of this data as it is produced and only store data for approximately 100 collisions per second. Their approach is born from the idea that although they might not know what they are looking for, they do know what they have already seen [4]. Although CERN is not yet using DNA molecules for the long-term storage of its data, it seems not so far-fetched that one of a number of new start-up companies may well make this a possibility soon. [5]
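The principle can be sketched in a few lines of code. This is only a toy illustration, nothing like CERN’s real trigger software: decide at the moment each event is produced whether it is interesting enough to keep, and discard the rest rather than letting a data lake build up. The “energy” values and the threshold below are invented for the example, with the threshold chosen so that only around 0.01% of events survive.

```python
import random

random.seed(0)

def event_stream(n):
    """A toy stream of 'collision events': each one is just a random energy value."""
    for _ in range(n):
        yield {"energy": random.expovariate(1.0)}

def is_interesting(event, threshold=9.2):
    """Keep only rare, high-energy events (an invented criterion for illustration)."""
    return event["energy"] > threshold

# Decide event by event, as the data is produced; nothing else is ever stored.
kept = [event for event in event_stream(1_000_000) if is_interesting(event)]

print(f"Stored {len(kept)} of 1,000,000 events "
      f"({len(kept) / 1_000_000:.4%}); the rest were discarded on the fly.")
```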

None of us knows what challenges lie ahead for us as teachers, nor for our students as we prepare them for careers we have not even heard of, but it does seem that big data will influence more of what we do and, invariably, how we do it. Smart data – filtered big data that is actionable – seems a more attractive prospect as we work out how to balance intuition and experience against newer technologies reliant on big data, where there is a potential for us to unwittingly drown in the “data lakes” we are now capable of generating. Big data is an exciting, rapidly evolving entity, and it is our responsibility to decide how we engage with it.

[1] Oxford Dictionaries: www.oxforddictionaries.com/definition//big-data, 2015.

[2] https://www.gov.uk/government/speeches/eight-great-technologies

[3] The Digital Universe of Opportunities: Rich Data and the Increasing Value of the Internet of Things, 2014, https://www.emc.com/leadership/digital-universe/

[4] https://home.cern/about/computing

[5] https://synbiobeta.com/entering-the-next-frontier-with-dna-data-storage/

Artificial Intelligence and the future of work

By Isabelle Zeidler, Year 7.

What is AI, and how will it change our future?

Firstly, for AI to work there are three key requirements: data, hardware and algorithms. An example of data is the set of words in a dictionary saved on a computer; you need this because otherwise Google Translate would not work. Hardware is necessary so that the computer is able to store the data. Lastly, algorithms are what many of us know as programming: the instructions that let us do something with our data.
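A toy sketch can make those three ingredients concrete. This is nothing like how Google Translate actually works, and the tiny word list below is invented for illustration: the dictionary is the data, the computer holding it is the hardware, and the look-up function is the algorithm.

```python
# The "data": a tiny German-English dictionary stored in the computer's
# memory (the hardware). These few entries are invented examples.
dictionary = {
    "Hund": "dog",
    "Katze": "cat",
    "Buch": "book",
}

# The "algorithm": a simple rule for doing something with that data.
def translate(word):
    return dictionary.get(word, f"(no entry for '{word}')")

print(translate("Hund"))   # -> dog
print(translate("Baum"))   # -> (no entry for 'Baum')
```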

The history of AI is longer than we might imagine: we have used AI since the 1950s. Machine Learning (ML) is a kind of AI, and we have used ML since the 1980s. The most modern kind of ML is Deep Learning (DL). Many of us do not know about this, but a lot of us know the companies that use it: two of the most advanced in DL are Google and IBM, with its Watson system. So why is DL so amazing? In ML, programmers code in some kind of rules; DL learns these rules by observation. This is similar to what happens when babies learn to speak – they rely on observing others.
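The difference between rules written by programmers and rules learnt by observation can also be shown with a toy example (illustrative only, and far simpler than real ML or DL): the first function below uses a threshold a programmer chose, while the second works its threshold out from labelled examples.

```python
# Hand-coded rule (classic programming): the programmer decides the rule.
def is_tall_hand_coded(height_cm):
    return height_cm > 175  # threshold chosen by a human

# "Learning by observation" (a toy stand-in for ML): the rule's threshold is
# worked out from labelled examples instead of being written by a programmer.
examples = [(150, False), (160, False), (170, False),
            (180, True), (190, True), (200, True)]  # (height, labelled "tall"?)

def learn_threshold(examples):
    tall = [h for h, label in examples if label]
    not_tall = [h for h, label in examples if not label]
    # Put the threshold halfway between the two groups seen in the data.
    return (min(tall) + max(not_tall)) / 2

threshold = learn_threshold(examples)   # 175.0, learnt from the examples

def is_tall_learned(height_cm):
    return height_cm > threshold

print(is_tall_hand_coded(178), is_tall_learned(178))  # True True
```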

There are four amazing skills which AI can do:

  • computer vision
  • natural language processing
  • complex independent navigation
  • machine learning

Not all AIs use all of these abilities. Examples of computer vision include the new passport control gates at the airport; another very popular example is the face recognition in an iPhone X or Surface Pro. The second skill is natural language processing: the ability to understand language. A familiar example is Alexa. In the future, some call centres will also use AI’s ability to understand language (this has already started): when you call a bank, a robot will be able to answer even complex inquiries, not just tell you your account balance. Examples of complex independent navigation include modern technologies such as drones and planes.

Do you think that AI may soon even be better than humans?

Well, it is happening already. Focusing on image recognition, some scientists compared the accuracy of machines with that of humans. Humans’ accuracy is around 97%. But AI’s accuracy has changed dramatically: eight years ago, machines were 65% accurate; in 2016, machines drew level with humans at 97%; today, in 2018, machines are even better than humans. This is why AI is very likely to change our world, both positively and negatively. On the positive side, AI-powered machines can understand many languages, can speak with many different accents, are never tired or grumpy, and may be cheaper.

In 1997, IBM took a big step for AI: for the first time, a machine (Deep Blue) beat a human world champion at chess. Its play was programmed in by humans, and it did not need ML, let alone DL. Nineteen years later, another exciting game was played: in the far more complex game of Go, a machine beat the world champion Lee Sedol. With Go, however, Google faced a big problem: the game has far too many possible moves to programme them all. So Google’s programmers used AI – they programmed in the rules and the objective of the game, and the AI learnt to win from there. Later, AlphaGo lost against AlphaGo Zero. Both used AI, but AlphaGo Zero was even more advanced: it learnt by playing against itself, starting from only the rules of the game.

Will AI-powered machines replace workers?

How much time could be saved by using AI in the future? McKinsey compared which human skills will be easiest to replace in the future. The skills that would be easy to replace include predictable physical work (building cars is already being automated) and collecting and processing data (because this is what computers do all the time, as a calculator does). On the other hand, four activities would not be easily replaced: management, expertise (applying judgement), interface (interacting with people) and unpredictable physical work (e.g. caretaking). The research group discovered that fewer than 10% of jobs can be fully automated, but more than 50% of work activities can be.

What will the future look like?

The following jobs will be in high demand: care providers, educators, managers, professionals and creatives. So, if you are interested in being a doctor, teacher, scientist, engineer, programmer or artist, you are less likely to be replaced by robots. However, AI will also take away jobs, such as those in customer interaction and office support; waiting tables and working on IT helpdesks will no longer be such promising careers (robots will fix robots!).

There are three main reasons why these jobs will be automated: to save costs, to provide better customer service and to offer entirely new skills. The main reason is better service. Saving costs also plays a big role, e.g. in building cars. And oil and gas rigs will be taken over by robots because it is less dangerous to send robots, which can go to most places.

In conclusion, AI is already taking over some elements of jobs. As the technology progresses, however, many more jobs may be automated.

The safest jobs are the ones with social skills.

(source: report by Susan Lund from McKinsey: https://www.mckinsey.com/~/media/McKinsey/Global%20Themes/Future%20of%20Organizations/What%20the%20future%20of%20work%20will%20mean%20for%20jobs%20skills%20and%20wages/MGI-Jobs-Lost-Jobs-Gained-Report-December-6-2017.ashx )


The Rapid Growth of Artificial Intelligence (AI): Should We Be Worried?

By Kira Gerard, Year 12.

“With artificial intelligence we are summoning a demon.” – Elon Musk

In 2016, Google’s AI group, DeepMind, developed AlphaGo, a computer program that managed to beat the reigning world champion Lee Sedol at the complex board game Go. Last month, DeepMind unveiled a new version of AlphaGo, AlphaGo Zero, that mastered the game in only three days with no human help, only being given the basic rules and concepts to start with. While previous versions of AlphaGo trained against thousands of human professionals, this new iteration learns by playing games against itself, quickly surpassing the abilities of its earlier forms. Over 40 days of learning by itself, AlphaGo Zero overtook all other versions of AlphaGo, arguably becoming the best Go player in the world.

Artificial intelligence is defined as a branch of computer science that deals with the simulation of intelligent behaviour in computers, allowing machines to imitate human behaviour in highly complex ways. Simple AI systems are already widespread, from voice-recognition software such as Apple’s Siri and the Amazon Echo, to video game AI that has become much more complex in recent years. AI already plays a key role in solving many problems, such as helping with air traffic control and fraud detection.

However, many people are concerned that the continued advancement of artificial intelligence could lead to computers that are able to think independently and can no longer be controlled by us, bringing about the demise of civilisation and life as we know it. In 2014 Elon Musk, the tech entrepreneur behind innovative companies such as Tesla and SpaceX, stated in an interview at MIT that he believed artificial intelligence (AI) is “our biggest existential threat” and that we need to be extremely careful. In recent years, Musk’s view has not changed, and he still reiterates the fear that has worried humanity for many years: that we will develop artificial intelligence powerful enough to surpass the human race entirely and become wholly independent.

As demonstrated in a multitude of sci-fi movies – 2001: A Space Odyssey, The Terminator and Ex Machina, to name a few – artificial intelligence is a growing concern, with the previously theoretical threat becoming more and more of a reality as technology continues to advance at a supremely high pace. Other prominent figures, such as Stephen Hawking and Bill Gates, have also expressed concern about the possible threat of AI, and in 2015 Hawking and Musk joined hundreds of AI researchers in sending a letter urging the UN to ban the use of autonomous weapons, warning that artificial intelligence could potentially become more dangerous than nuclear weapons.

This fear that AI could become so powerful that we cannot control it is a very real concern, but not one that should plague us with worry. The current artificial intelligence we have managed to develop is still very basic in comparison to how complex a fully independent AI would need to be. AlphaGo’s Lead Researcher, David Silver, stated that through the lack of human data used, “we’ve removed the constraints of human knowledge and it is able to create knowledge itself”. This is an astonishing advancement, and signals huge improvements in the way we are developing artificial intelligence, bringing us a step closer to producing a multi-functional general-purpose AI. However, AlphaGo Zero’s technology can only work with tasks that can be perfectly simulated in a computer, so highly advanced actions such as making independent decisions are still out of reach. Although we are on the way to developing AI that matches humans at a wide variety of tasks, there is still a lot more research and development needed before advanced AI will be commonplace.

The artificial intelligence we live with every day is very useful for us, and can be applied in a variety of ways. As addressed by Mr Kane in last week’s WimTeach blog, technology has an increasing role in things such as education, and we are becoming ever more reliant on technology. Artificial intelligence is unquestionably the next big advancement in computing, and as Elon Musk stated in a recent interview: “AI is a rare case where I think we need to be proactive in regulation instead of reactive… by the time we are reactive in regulation it is too late.” As long as we learn how to “avoid the risks”, as Hawking puts it, and ensure that we regulate the development of such technologies as closely as we can, our fears of a computer takeover and the downfall of humanity will never become reality.