Scientists To Examine How AI Can Help People Avoid Cyber Attacks
18 July 2017, 11:25
How people respond to phishing emails and common cyber attacks is to be the focus of a £1 million university research project to improve online security.
Scientists at the University of Aberdeen are working to find ways to prevent hackers from enticing people into downloading malware, such as that used in recent large-scale attacks which badly affected the NHS across the UK.
The research team believe the main problem faced by big organisations is getting computer users to follow existing security policies.
The project will test how Artificial Intelligence (AI) and persuasion techniques can improve adherence to security advice.
The UK Engineering and Physical Sciences Research Council has awarded the research team £756,000 towards their Supporting Security Policy with Effective Digital Intervention project, which now has total funding of more than £1 million.
Dr Matthew Collinson, who is the principal investigator on the project, said: ''If we look at most cyber security attacks, there is a weakness relating to human behaviour that hackers seek to exploit.
''Their most common approach, and the one we are most familiar with, is the use of phishing emails to entice a user to download malware on to their computer.
''One of the main problems faced by companies and organisations is getting computer users to follow existing security policies, and the main aim of this project is to develop methods to ensure that people are more likely to do so.''
The project coincides with the launch of a new master's degree in AI at the University of Aberdeen.
Dr Collinson said: ''The project applies our world-leading expertise in both AI and human-computer interaction.
''In the case of human-computer interaction, this specifically relates to the field of persuasive technologies, which are designed to encourage behaviour change and are more commonly applied in healthcare, for example to encourage patients to follow medical advice.
''In terms of AI, we will investigate how intelligent programs can be constructed which can use dialogue to explain security policies to users, and utilise persuasion techniques to nudge users to comply.
''In addition we will be using sentiment analysis to detect people's attitudes to security policies through natural language, for example through their email correspondence.
''Ultimately we are looking to employ all of these techniques to identify the issues that make us less likely to follow security advice, and make recommendations as to how these can be overcome.''
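To illustrate the kind of sentiment analysis Dr Collinson describes, here is a minimal sketch of a lexicon-based scorer applied to email text. This is purely illustrative and not the project's actual method: the cue-word lists, the `policy_sentiment` function, and the sample email are all invented for this example.

```python
# Illustrative lexicon-based sentiment sketch (not the project's method):
# score an email's attitude towards a security policy by counting
# positive and negative cue words. Word lists here are hypothetical.

POSITIVE = {"helpful", "clear", "easy", "sensible", "useful"}
NEGATIVE = {"annoying", "confusing", "slow", "pointless", "tedious"}

def policy_sentiment(text: str) -> float:
    """Return a score in [-1, 1]; negative values suggest resistance."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

email = "The new password policy is confusing and the reset process is slow."
print(policy_sentiment(email))  # prints -1.0: both cue words found are negative
```

A real system would use a trained sentiment model rather than fixed word lists, but the idea is the same: a strongly negative score flags users who may resist a policy and could benefit from a persuasive intervention.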