How should we really feel about the arrival of artificial intelligence in the classroom?

Claire Boyd, Head of Junior School, reflects upon the emotional response pupils, teachers and parents may experience as artificial intelligence, big data and augmented reality look set to change the educational landscape.

 

It seems to me that we can be in no doubt that the educational zeitgeist of the moment is the potential that artificial intelligence, big data and augmented reality hold for education. As Ben Turner, Assistant Head Pastoral, explored on the pages of this blog last month, the take-home message from the keynote speakers at September’s Grow 2.0 conference was the unprecedented scope of new technologies to create a bespoke, tailored learning experience for individual learners.

The same sentiments were echoed by Priya Lakhani, CEO and founder of Century Tech, on stage at the annual conference of the Independent Association of Prep Schools in central London a fortnight ago. Lakhani evangelised about the capacity AI holds to democratise education and remove silos of learning from classrooms around the world.

Much closer to home, my path across the playground from the Junior School to the Senior School each day serves as a compelling reminder of the exciting and progressive space Wimbledon High School is creating to pursue a new, non-binary approach to teaching the disciplines of science, technology, engineering, art and maths, allowing new approaches to innovation and collaboration to flourish through Project Ex Humilibus. The cumulative effect of these realities casts a different landscape of education with which many parents and teachers in our school community will identify. Yet whilst there is broad recognition that the traditional ‘factory’ model of education, in which knowledge and skills are played off against one another as learners are spoon-fed a linear curriculum of discrete subject teaching, is outmoded and anachronistic, there is little clarity about what an AI- and data-led education system will actually offer.

Pupils experiment with VR headsets at the recent WHS Grow 2.0 Conference

With wearable technologies and virtual home devices such as Google Home and Alexa becoming increasingly commonplace in households around the country, and ‘customers who bought this also liked’ recommendations now par for the course in our online shopping experiences, the creeping pervasiveness of data-driven AI devices is changing the face of many of our everyday transactional experiences beyond recognition. Heralded for the part they play in increasing convenience and expediting smart living, the contribution these new products are making to modern life is widely celebrated across mainstream society. When the capacity and functionality of these data-driven technologies are applied to the educational realm, I wonder how closely that positive reception will be matched. How comfortable are we with the diagnostic skills of AI-driven learning technologies being applied to the classroom environment, taking up residence in the learning space traditionally driven by the expertise of the teacher as the professional?

Throughout history, schools, universities and other educational institutions have provided space for progressive ideas to germinate and for new approaches to intellectual and emotional development to evolve. With this in mind, might the advantages afforded to us in our personal lives by big data and AI not be replicated in the spheres of learning and education? The potential for AI to collect, collate and analyse the data of individual learning profiles and to build personalised learning pathways of attainment, progress and development is so large in scale that the possibilities for bespoke education appear infinite.
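As a purely hypothetical illustration of what such a pathway might look like in code, the short Python sketch below tracks an estimated mastery score per topic and suggests the accessible topic a pupil is currently weakest in. The topics, prerequisites and threshold are invented for illustration; no real platform’s method is being described.

# A hypothetical sketch of a "personalised learning pathway": suggest the
# next topic whose prerequisites are secure and whose mastery is lowest.
# Topics, prerequisites and the 0.7 threshold are invented for illustration.
PREREQUISITES = {
    "number": [],
    "fractions": ["number"],
    "algebra": ["number", "fractions"],
    "graphs": ["algebra"],
}

def next_topic(mastery, threshold=0.7):
    # mastery maps topic -> estimated proficiency between 0 and 1
    ready = [
        topic for topic, prereqs in PREREQUISITES.items()
        if mastery.get(topic, 0.0) < threshold
        and all(mastery.get(p, 0.0) >= threshold for p in prereqs)
    ]
    # Suggest the accessible topic the pupil is currently weakest in.
    return min(ready, key=lambda t: mastery.get(t, 0.0), default=None)

# next_topic({"number": 0.9, "fractions": 0.4, "algebra": 0.5})  ->  "fractions"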

The advent of AI in education should also not be feared because of what it represents for teachers and the teaching profession more widely. Instead of viewing this new frontier with apprehension or scepticism, we should, as school communities, feel excited and energised about what lies ahead. This is because, when most people are asked to consider the favourite teacher of their school days, their responses will most commonly focus upon the way that teacher made them feel; the way they believed in them and empowered them to achieve an ambition or succeed at something they would not otherwise have achieved. It is by combining this capacity for relationship-building, rooted in the nuances of emotional intelligence and in human-specific skills such as meta-cognition and social intelligence, with the power of the artificial that we hold the potential to equip the children of today with a springboard for unfettered success in the future. In the words of UCL’s Prof Rose Luckin, the “holy grail for education in the future is accurately perceived self-efficacy”.

How is the Turing Test Relevant to Philosophy?

Kira, Year 13, looks at the Turing test and how criticisms of it bring new ideas and concepts into the philosophy of mind.

Alan Turing

As emerging areas of computer science such as Artificial Intelligence (AI) continue to grow, questions surrounding the possibility of a conscious computer are becoming more widely debated. Many AI researchers have the objective of creating Artificial General Intelligence: AI that has an intelligence, and potentially a consciousness, similar to humans. This has led many to speculate about the nature of an artificial mind, and an important question arises in the wake of this modern development and research: “Can computers think?”

Decades before the development of AI as we know it today, Alan Turing attempted to answer this question in his 1950 paper Computing Machinery and Intelligence. He developed the famous Turing test as a way to evaluate the intelligence of a computer. Turing proposed a scenario in which a test subject would have two separate conversations: one with another human, and one with a machine designed to give human-like responses. These conversations would take place through a text-channel so the result would not be affected by the machine’s ability to render speech. The test subject would then be asked to determine which conversation took place with a machine. Turing argued that if they are unable to reliably distinguish the machine from the other human, then the machine has ‘passed the test’, and can be considered intelligent.
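The structure of the test is easy to express as a procedure. The Python sketch below is a minimal, hypothetical rendering of it, not Turing’s own formulation: the judge exchanges typed messages with two unlabelled respondents and must say which is the machine. The function names and the single-round scoring are illustrative assumptions.

# A minimal sketch of the imitation game's structure, not Turing's own
# formulation: a judge converses over text with two unlabelled respondents
# (one human, one machine) and must say which is which.
import random

def imitation_game(ask, identify_machine, human_reply, machine_reply, n_turns=5):
    # Hide which text channel is the machine by shuffling the respondents.
    channels = [("A", human_reply), ("B", machine_reply)]
    random.shuffle(channels)
    transcripts = {label: [] for label, _ in channels}

    for _ in range(n_turns):
        for label, reply in channels:
            question = ask(label, transcripts[label])   # judge types a question
            answer = reply(question)                    # text in, text out only
            transcripts[label].append((question, answer))

    guess = identify_machine(transcripts)               # judge names "A" or "B"
    actual = next(label for label, r in channels if r is machine_reply)
    return guess == actual   # True means the machine was caught this round

A machine that the judge cannot reliably pick out across many such rounds is, on Turing’s proposal, behaving intelligently.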

At the start of his essay, Turing specifies that he would not be answering “Can computers think?”, but a new question that he believed we are able to answer: “Are there imaginable digital computers which would do well in the imitation game?” However, Turing did believe that a computer which was able to succeed in ‘the imitation game’ could be considered intelligent in a similar way to a human. In this way, he followed a functionalist idea about the mind – identifying mental properties through mental functions, such as determining intelligence through the actions of a being, rather than some other intrinsic quality of a mental state.

Many scholars have criticised the Turing test, most famously John Searle, who put forward the Chinese Room argument, and the distinction between ‘strong’ and ‘weak’ AI, to illustrate why he believed Turing’s ideas about intelligence to be false. The thought experiment considers a computer that behaves as though it understands Chinese. It is, therefore, able to communicate with a Chinese speaker and pass the Turing test, as it convinces the person that they are talking to another Chinese-speaking human. Searle then asks whether the machine really understands Chinese, or whether it is merely simulating the ability to speak the language. The former is what Searle calls ‘strong AI’; he refers to the latter as ‘weak AI’.

In order to answer his question, Searle illustrates a situation in which an English-speaking human is placed in a room with a paper version of the computer program. This person, given sufficient time, could be handed a question written in Chinese and produce an answer by following the program’s instructions step-by-step, in much the same way as a computer does. Although this person is hence able to communicate with somebody speaking Chinese, they do not actually understand the conversation that is taking place, as they are simply following instructions. In the same way, a computer able to communicate in Chinese cannot be said to understand the language. Searle argues that without this understanding, a computer should not be described as ‘thinking’, and as a result should not be said to have a ‘mind’ or ‘intelligence’ in a remotely human way.
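Searle’s point can be made concrete with a toy program. In the sketch below, replies are produced by matching symbol shapes against a rule book and copying out the prescribed answer; no step requires knowing what either string means. The rules themselves are invented for illustration and stand in for the vastly larger program the thought experiment imagines.

# A toy stand-in for the Chinese Room's rule book: replies are produced by
# matching symbol shapes alone. Nothing in the procedure represents meaning.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thank you."
    "你会说中文吗？": "会的。",        # "Do you speak Chinese?" -> "I do."
}

def room_occupant(incoming_symbols):
    # Look the shapes up and copy out the prescribed reply, step by step;
    # no stage requires understanding what either side is saying.
    return RULE_BOOK.get(incoming_symbols, "对不起，我不明白。")  # "Sorry, I don't understand."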

Searle’s argument has had a significant impact on the philosophy of mind and has come to be viewed as an important argument against functionalism. The thought experiment provides opposition to the idea that the mind is merely a machine and nothing more: if the mind were just a machine, it would be theoretically possible to produce an artificial mind capable of perceiving and understanding all that it sees around it. According to Searle, this is not a possibility. However, many people disagree with this belief – particularly as technology develops ever further, the possibility of a true artificial mind seems more and more likely. Despite this, Searle’s Chinese Room argument continues to aid us in discussions about how we should define things such as intelligence, consciousness, and the mind.

In this way, both the Turing test and Searle’s critique of it shed new light onto long-standing philosophical problems surrounding the nature of the human mind. They serve to help bring together key areas of computer science and philosophy, encouraging a philosophical response to the modern world, as well as revealing how our new technologies can impact philosophy in new and exciting ways.

Artificial Intelligence & Art: A Provocation – 14/09/18

Rachel Evans, Director of Digital Learning and Innovation at WHS, looks at the links between Art and Artificial Intelligence, investigating how new technology is driving innovation in the discipline.

What is art? We might have trouble answering that question: asking whether a machine can create art takes the discussion in a new direction.

Memo Akten is an artist based at Goldsmiths, University of London, where much exciting work is taking place around the intersection of artificial intelligence and the creative arts.

Akten’s work Learning to see was created by first showing a neural network tens of thousands of images of works of art from the Google Arts Project. The machine then ‘watches’ a webcam, under which objects or other images are placed, and uses its ‘knowledge’ to create new images of its own. This still is from the film Gloomy Sunday. Was it ‘thinking’ of Strindberg’s seascape?
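For readers curious about the mechanics, the loop at the heart of such a piece is simple to sketch. The Python below, using OpenCV, shows one plausible shape for it: a model that has only ever ‘seen’ artworks repeatedly reinterprets live webcam frames. Akten’s actual architecture and training pipeline are not described in this post, so the generator is left as a placeholder and everything model-specific is an assumption.

# A minimal sketch of the kind of loop 'Learning to see' suggests: a model
# trained only on artworks reinterprets each webcam frame as it arrives.
# The generator is a placeholder; any image-to-image function would fit here.
import cv2
import numpy as np

def watch_and_reimagine(generator, camera_index=0):
    cap = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = cap.read()          # what the machine currently "sees"
            if not ok:
                break
            reimagined = generator(frame)   # reinterpreted via what it "knows"
            cv2.imshow("Learning to see (sketch)", np.hstack([frame, reimagined]))
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()

# For example, with a stand-in "painterly" transform rather than a trained network:
# watch_and_reimagine(lambda frame: cv2.stylization(frame))

The interesting part, of course, is everything hidden inside the generator: what the network has been shown determines what it is able to ‘see’.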

I have been fascinated by this artwork since I first saw it and have watched it many times. The changing image is mesmerising as the machine presents, develops and alters its output in response to the input. It draws me in, not only as a visual experience, but for the complex response it provokes as I think about what I am seeing.

Akten describes the work as:

An artificial neural network making predictions on live webcam input, trying to make sense of what it sees, in context of what it’s seen before.

It can see only what it already knows, just like us.

In 1972 the critic John Berger used the exciting medium of colour television to present a radical approach to art criticism, Ways of Seeing, which was then published as an affordable Penguin paperback. In the opening essay of the book he wrote: “Every image embodies a way of seeing. […] The photographer’s way of seeing is reflected in his choice of subject. […] Yet, although every image embodies a way of seeing, our perception or appreciation of an image also depends on our own way of seeing.” When Akten writes that the machine “can see only what it already knows, just like us”, he approaches the idea that the response of the neural network is human-like in its desire to find meaning and context, just as we attempt to find an image which we can recognise in the work it creates.

If the artist is choosing the subject, but the machine transforms what it sees into ‘art’, is the machine ‘seeing’? Or are we wholly creating the work in our response to it and the work is close to random – a machine-generated response to a stimulus not unlike a human splattering paint?

Jackson Pollock wrote “When I am in my painting, I’m not aware of what I’m doing. It is only after a sort of ‘get acquainted’ period that I see what I have been about. I have no fear of making changes, destroying the image, etc., because the painting has a life of its own.” Is the neural network performing this role here for the artist, of distancing during the creative process, of letting the ideas flow, to be considered afterwards?

Is the artist the sole creator, in that he has created the machine? That might be the case at the moment, with the current technology, but interestingly Akten refers to himself as “exploring collaborative co-creativity between humans and machines”.

I find this fascinating and it raises more questions than I can answer: it leaves me wanting to know more. It has prompted me to delve back into my own knowledge and understanding of art history and criticism to make connections that will help me respond. In short – encountering this work has caused me to think and learn.

In the current discussions in the media and in education around artificial intelligence we tend to focus on the extremes of the debate in a non-specific way – with the alarmist ‘the robots will take our jobs’ at one end and the utopian ‘AI will solve healthcare’ at the other. A focus for innovation at WHS this year is to open up a discussion about artificial intelligence, but this discussion needs to be detailed and rich in content if it’s going to lead to understanding. We want the students to understand this technology which will impact on their lives: as staff, we want to contribute to the landscape of knowledge and action around AI in education to ensure that the solutions which will arrive on the market will be fair, free of bias and promote equality. Although a work of art may seem an unusual place to start, the complex ideas it prompts may set us on the right path to discuss the topic in a way which is rigorous and thoughtful.

So – let the discussion begin.