Technology – the new research frontier?

By Dr Kim Schenke

The face of psychology has evolved greatly since the discipline’s first mention around the 18th century (according to Google), though I’m not sure it has shed some of its early connotations. Many still believe it to be a ‘soft’ science, if a science at all. One reason for this is probably the way it is talked about in mainstream media and in the various entertainment shows built around it. This ‘pop’ psychology, in which psychologists are magical mind readers psychoanalysing everyone they meet, is frustrating for those who study the subject and has led to many a meme:

[Image: the ‘psychology face’ meme]

The reality is that all cognizant beings are amateur mind readers in some respect – to communicate with others we must, on some level, try to read their reactions. But psychology as a discipline is far more than a load of body language experts (indeed, as those who have studied psychology will know, neither body language analysis nor mind reading tends to account for much of the psychology syllabus)!

But enough about what psychology is not. Psychology, like all sciences, is constantly evolving – we are learning (and unlearning) new information every day. Importantly, we are constantly evolving our methodology – how we test our (many) hypotheses. One of the greatest driving forces behind this evolution is technology.

Technology has changed the face of psychology and allowed the exploration of previously unimaginable constructs. For example, the advent of computers allowed us to present stimuli and measure participant response times with millisecond precision, resulting in a wide range of knowledge about even the most basic, low-level perceptual, attentional and motor influences driving our behaviour. Not to mention the complex calculations and analyses that computers can perform in seconds. Whilst many psychology students may shudder at the mention of IBM SPSS (other statistical software is available), it has taken on the burden of performing all sorts of complex analyses; you think statistics classes are tough now – imagine having to perform all those calculations by hand (suddenly it does not seem so bad after all, does it?)
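
To make the timing point concrete, here is a minimal sketch in plain Python of what a single reaction-time trial boils down to. It is illustrative only – the stimulus, trial count and terminal input are my own hypothetical stand-ins, and a real lab would use dedicated experiment software rather than accept the keyboard and operating-system latency of input():

```python
import random
import time

# Minimal reaction-time sketch (illustrative only; real experiments use
# dedicated software, since terminal input adds keyboard and OS latency).
reaction_times = []

for trial in range(5):
    time.sleep(random.uniform(1.0, 3.0))  # random foreperiod so onset is unpredictable
    start = time.perf_counter()           # high-resolution clock, sub-millisecond precision
    input("X  <- press Enter as fast as you can!")
    reaction_times.append(time.perf_counter() - start)

mean_rt = sum(reaction_times) / len(reaction_times)
print(f"Mean RT: {mean_rt * 1000:.1f} ms")
```

The point is less the code than the capability: a high-resolution clock like perf_counter gives timestamps far finer than a millisecond, something unthinkable in the pen-and-stopwatch era.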

The invention of neuroimaging technologies means that we can now investigate brain activity at very high spatial and/or temporal resolution. We can now investigate, for example, common areas of the brain affected by various phenomena, and the time course of responses to subliminal stimuli. Indeed, technological advances have revolutionised the field, but one thing that does not always update along with these advances is how we conduct research itself. We need to ensure our methods and our thinking are updated alongside this.

The open science movement is integral to our field – by being more open about our research (and our methodology/analyses) we can work together to do science the way science should be done (collaboratively, not hidden away in separate labs). The sheer resources (including time and money) probably wasted every year by not being open about our research findings are mindboggling. By this I do not mean that there are scientists actively hiding their research practices or fabricating data (though sadly there are cases of this reported); rather, the whole system seems to be set up to prevent/reduce open science. The various metrics for measuring ‘success’ in academia tend to reward (or not) the individual, rather than the team. When it comes to publishing, there is an air of not revealing your most interesting ideas/findings until you have a solid publication (which can take years): journals tend to want novel findings, so researchers do not want to risk someone beating them to the publication. Yet this is all wrong. Science cannot (and should not) be conducted in secret. Researchers working in isolation will not identify the most important discoveries. With practices changing alongside technological advancements (which are coming thick and fast), this has never been more important. For example, virtual (and augmented) reality is becoming more commonplace in research – the ability to create more ecologically valid tests whilst still maintaining experimental control is very appealing to many researchers. However, we need to work together to ensure the highest quality for these new research avenues and to agree on the best standards/practices for using this new technology.

One area where such a need is most pressing is artificial intelligence (AI) and robotics. For a long time, films and TV programmes have been dedicated to science fiction – what might humanity look like in the future? What amazing discoveries might we make? What weird and wonderful technologies might support us? Robotics and AI tend to be a key focus of such ponderings. Perhaps this began with people trying to make life easier for us mere mortals – can we create technology that can do those jobs we do not want to, or cannot, do ourselves? This I can get on board with – a robot vacuum that does the cleaning for you sounds pretty awesome to me. But there seems to be a fascination with whether we could create an AI that could be human-like – a humanoid. TV shows like Humans imagine such inventions – a synthetic creation that could have human features and even think like (or beyond) a human. However, as with many sci-fi concepts, the reality of actually creating such beings is layered with difficulties. For a start, if we are to create these synthetics, we first need to answer the question: what does it mean to be human?

Assuming we could answer this question, it is still unlikely that we will see such creations any time soon. Whilst many labs around the world are investigating this, they struggle to capture even the most basic of human capabilities – getting a synthetic device to walk like a human, for example, something we actually understand rather well. Constructing a device that could have consciousness is harder still to imagine when we do not even have a clear understanding of what consciousness really is in the first place.

To me, the more interesting question is why we want to create such a synthetic being. What would be the purpose? Indeed, as is raised in the TV show Humans, how would we define these beings? Would they have the same rights as humans if they can think and feel as a human can? If there are labs around the world trying to create this, then we need to start discussing these important ethical issues – yet another reason it is so important to be open and collaborative in science.

Now whilst we are unlikely to see fully fledged humanoid robots (the kind you could realistically mistake for a human in both appearance and behaviour) any time soon, there have been some promising (or perhaps quite scary, depending on your perspective) AI developments in recent years. Advances in computer processing power mean that people are developing ‘supercomputers’ that can process more data in a few milliseconds than perhaps the average human could process in a year. Indeed, many researchers are now focused on cars that can navigate between locations without the need for human intervention. But, as with any new technology/concept, there have been major difficulties/setbacks, to put it lightly. As such, we are long overdue a discussion on the ethical implications of creating AIs/robots.

So often with big scientific developments we seem to skip the part where we debate the need for, the use of, and the pros and cons of a concept or technology. New technologies are often accompanied by old ways of thinking/doing; things like ethics and design issues tend to be left behind in the excitement of the growing potential. Take social media, for example – so many people are keen to share their lives with their friends and family online that they do not think about the consequences of sharing their information with the rest of the world as well. Rather than wishing with hindsight that we had acted differently, we should be trying to foresee these kinds of issues and thinking carefully about the best approaches as the technology develops. This is particularly important when considering research design – before the technologies get any more advanced we need to try to futureproof research as much as possible. For example, are there likely to be any ethical considerations we will need to take into account? How will we assess the safety of these new measures? How do we need to adapt our methodology, or ways of thinking, in line with these new technologies? To go back to our self-driving car example, how should we ‘teach’ these AIs? What morals should we instil in them? The obvious issue with the self-driving car is who it should be programmed to save in the event of an inevitable crash. There is a lot of research investigating this at the moment, but who should have the final say?
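
To see why ‘instilling morals’ is anything but trivial, consider a deliberately crude sketch of what a crash-decision rule forces a programmer to write down explicitly. The outcome categories, numbers and passenger_weight parameter here are all hypothetical – they are not drawn from any real system, and that is exactly the point:

```python
from dataclasses import dataclass

# A crude, hypothetical sketch: any rule a self-driving car follows in an
# unavoidable crash must ultimately be written down as explicit values.
# None of these numbers reflect a real system.

@dataclass
class Outcome:
    description: str
    pedestrians_harmed: int
    passengers_harmed: int

def harm_score(o: Outcome, passenger_weight: float = 1.0) -> float:
    # Who chose passenger_weight? Weighting passengers and pedestrians
    # equally (1.0) is itself a moral decision someone has to make.
    return o.pedestrians_harmed + passenger_weight * o.passengers_harmed

options = [
    Outcome("swerve onto pavement", pedestrians_harmed=2, passengers_harmed=0),
    Outcome("brake and stay in lane", pedestrians_harmed=0, passengers_harmed=1),
]

best = min(options, key=harm_score)
print(f"Chosen action: {best.description}")
```

Every value in that sketch is a moral judgement in disguise, which is precisely why these choices need to be debated openly rather than left to whoever happens to write the code.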

It’s an exciting time in psychology (and science as a whole). Technology is allowing us to explore ever more complex hypotheses about ever more complex behaviours. Just remember: theories are only as good as the evidence they are based on. We need to think carefully about the ethical and design issues of using technology in our research, and we need to start having more open and honest conversations about research.
