
In Ctrl: How much help do you want from your computer?

9 Aug, 2012
Screengrab of Rosie the robot maid from The Jetsons

Helpful computer?

When I tap my phone’s sound settings icon, it automatically switches to silent mode before I get to the menu to do it myself. The first time this happened, I was rather unnerved, but over time I have become more comfortable with my phone’s ability to predict what I’m about to do and complete the action for me. It feels normal, even helpful. But as computers get better at anticipating our intentions, will we see it as an obvious boon, increasing efficiency, or as a worrying loss of human control? What if my phone misinterpreted my intentions and, instead of going silent, played my favourite tune on loudspeaker in the middle of a meeting? Would it be my fault?

The experience of controlling our own actions and their effect on the world is called agency. It is what allows us to be confident when we say “I did that” and it is a fundamental aspect of human existence. Loss of agency can be trivial and fleeting, such as when my phone first silenced itself, or extremely serious and debilitating: some people with schizophrenia, for example, feel their actions are not their own but dictated by some external force.

Dr David Coyle, currently a lecturer in human-computer interaction at the University of Bristol, has been testing the extent to which the way we interact with a computer can affect our sense of agency. Some findings from his work at the University of Cambridge, in collaboration with other researchers in computing science and psychiatry, were presented at a recent conference on Human Factors in Computing Systems in Austin, Texas.

Measuring agency is not entirely straightforward but there is a standard method for assessing it, as I found out when I tried one of the experimental procedures for myself.

I did this!

Screenshot of the agency experimental protocol

It was a simple task: move the pointer from a red box in the centre of the screen to one of two targets – green dots at the sides of the screen – and click on it. A beep followed and I had to estimate the time between clicking the dot and hearing the beep. As I repeated the task, the degree to which the computer helped me put the pointer on the dot increased until it reached such a level that it would probably have been harder to miss than to hit.

Why did I have to estimate the time interval before the beep? When asked to perform a voluntary, intentional action (such as clicking on a dot) that has an outcome (such as a beep), people tend to think that the action happened slightly later than it did and the outcome slightly earlier, reducing the perceived time interval between them. When doing a similar action unintentionally, our perceptions are reversed and the interval seems longer than it really is.

In the task, therefore, my estimate of the length of time between click and beep (which varied between 0 and 1 second) was a measure of the extent to which I felt in control of the pointer’s movement.
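The logic of this measure can be sketched in a few lines of code. This is an illustrative reconstruction, not the researchers’ analysis: the function names and the trial values are hypothetical, chosen only to show how under- and over-estimation of the interval map onto a sense of agency.

```python
# Hypothetical sketch of the intentional-binding measure: a sense of
# agency is inferred from whether people under- or over-estimate the
# click-to-beep interval. Trial data below are illustrative, not real.

def mean_binding(trials):
    """Return the mean (actual - estimated) interval in seconds.
    Positive => intervals felt shorter => a stronger sense of agency.
    Negative => intervals felt longer => a weaker sense of agency."""
    return sum(actual - estimated for actual, estimated in trials) / len(trials)

# Illustrative trials: (actual interval, participant's estimate).
low_assist = [(0.4, 0.30), (0.7, 0.55), (0.2, 0.15)]   # underestimates
high_assist = [(0.4, 0.50), (0.7, 0.85), (0.2, 0.30)]  # overestimates

print(mean_binding(low_assist) > 0)    # felt in control
print(mean_binding(high_assist) < 0)   # felt the computer was in control
```

Averaging over many trials like this is what lets a subjective feeling be compared across assistance levels.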

Computers have for many years had options for enhancing the pointer’s function: making it ‘snap’ to the nearest button on the screen means you don’t have to be as accurate when using the mouse or trackpad. In this experiment, an algorithm effectively added ‘gravity’ to the green dots, pulling the pointer towards them. Adjusting the algorithm increased the force of ‘gravity’ from none – normal, no assistance – to mild, moderate or high.
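One way such target ‘gravity’ could work is to blend the pointer’s free position with a pull toward the nearest target, scaled by the assistance level. The sketch below is a guess at the general idea, not the experiment’s actual algorithm; all names and parameters are illustrative.

```python
# Hypothetical sketch of pointer 'gravity': after the user's movement is
# applied, the pointer is pulled toward the nearest target in proportion
# to an assistance level (0.0 = none, 1.0 = maximal). Illustrative only.

def assisted_move(pointer, delta, targets, assistance):
    """Apply the user's movement `delta` to `pointer`, then attract the
    result toward the nearest of `targets` by a factor of `assistance`."""
    x, y = pointer[0] + delta[0], pointer[1] + delta[1]
    # Find the nearest target (one of the green dots).
    nearest = min(targets, key=lambda t: (t[0] - x) ** 2 + (t[1] - y) ** 2)
    # Blend the free position with the target position.
    x += assistance * (nearest[0] - x)
    y += assistance * (nearest[1] - y)
    return (x, y)

dots = [(-10.0, 0.0), (10.0, 0.0)]
print(assisted_move((0.0, 0.0), (1.0, 0.0), dots, 0.0))  # free movement
print(assisted_move((0.0, 0.0), (1.0, 0.0), dots, 1.0))  # snaps to (10, 0)
```

With assistance at zero the pointer moves exactly as the hand directs; at the maximum, a nudge in one direction is enough to land on that side’s dot, which matches the ‘one flick of the mouse’ experience described below.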

As I worked my way through the levels, I could feel the effect growing. I no longer had to concentrate on placing the pointer on the dot; I only had to get it close. By the end, when assistance was highest, just starting to move the pointer to one side or the other meant it ended up on the relevant dot quicker than I could have hoped to do it with mere human hand-eye coordination. One flick of the mouse and the pointer arrived in position and I could click.

Because I knew the purpose of the experiment – and a bit about how it worked – the data from my run can’t be used in any meaningful analysis. But the results that Coyle and his colleagues got from their participants showed that people tended to underestimate the time interval between the click and the beep when there was no or mild assistance. With mild assistance, they were aware that the computer was helping them to complete the task but did not experience any loss of agency.

However, when the level of assistance increased to moderate or high, they started to overestimate the time interval, suggesting they now felt they were not in control of the pointer’s movement. This was despite the fact that they were still choosing whether to move it to the left- or the right-hand dot, and they still had to click on the dot. Clearly, a line had been crossed: users’ sense of agency had given way to a sense that the computer was in control.

‘What’s it like to be a button?’

In another experiment, Coyle and his colleagues compared two means of performing an action on the computer. In addition to the usual mouse, trackpad or touchscreen, new devices to interact with computers are being developed. One type uses a microphone to pick up vibrations in the skin of a person’s arm – tapping your arm replaces clicking the mouse button.

Comparing a standard button with the new ‘Skinput’ device, the researchers found that people felt a greater degree of agency when using their arm instead of the mouse.

What do these experiments tell us about agency and our interactions with computers and other technologies? They certainly suggest that there is a point at which people’s experience of ‘helpful’ computers changes – we switch from feeling in control to feeling less engaged with the outcomes of what are still, essentially, our actions. This could have important repercussions in future computer and software design.

However, the degree to which we lose our sense of agency may be influenced by the way we interact with computers – perhaps using devices more closely connected to our bodies would help us to retain agency even with greater assistance from our computer.

Tasks such as the one I tried are simple and have no wrong outcome: it will be interesting to see how our sense of agency is affected when there is the possibility of making mistakes, when there is blame to be apportioned. There are many more experiments like this that could help explore agency in human-computer interactions. Coyle’s interdisciplinary approach could become a standard for use in the computing industry and the academic community alike, where understanding agency could have applications in psychiatry and neuroscience as much as in developing new computing technology.

Notes and reference

The research paper is available at: http://dx.doi.org/10.1145/2207676.2208350

The research was carried out in collaboration with Paul Fletcher, a Wellcome Trust Senior Research Fellow at the Behavioural and Clinical Neuroscience Institute (BCNI), Alan Blackwell, a Reader in Interdisciplinary Design at the Computer Laboratory, James Moore, previously of the BCNI and now a Lecturer in the Department of Psychology at Goldsmiths, University of London, and Per Ola Kristensson of the School of Computer Science, University of St Andrews.

Other funding came from the Engineering and Physical Sciences Research Council, the Bernard Wolfe Health Neuroscience fund and an IRCSET-Marie Curie International Mobility Fellowship held by David Coyle, whose research was also supported by Trinity College Dublin.

Top image credit: The Jetsons unofficial home

One Comment
  1. 9 Aug, 2012 4:55 pm

    A mouse with no buttons? Sounds fun… We want to control our computer with our minds!!! :D
