Research trio advocates more work on AI security

What if someone hacked a traffic sign with a few well-placed dots, so your self-driving car did something dangerous, such as going straight when it should have turned right?

Don’t think it’s unlikely – it has already happened – and an Okanagan College professor and his colleagues from France are among those arguing that researchers must invest more effort in system design and security to guard against such attacks.

Youry Khmelevsky

A research paper co-authored by Okanagan College Computer Science Professor Dr. Youry Khmelevsky was recently presented at an international conference held by the Institute of Electrical and Electronics Engineers (IEEE), the world’s largest technical professional organization. The paper summarizes the research already done into the threats and dangers associated with the machine-learning processes that underpin autonomous systems such as self-driving cars.

Their paper also points to the need to take research and tool development for “deep learning” to a new level. (Deep learning – DL – is what makes facial recognition, voice recognition and self-driving cars possible. DL systems use artificial neural networks – loosely modelled on the brain – that learn to process data by extracting patterns from large numbers of examples.)
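The traffic-sign hack described above is an example of what the security literature calls an adversarial perturbation: many tiny pixel changes, each invisible on its own, chosen to line up with a model’s internal weights. A toy sketch (hypothetical code, not from the paper) using a simple linear classifier shows why such small changes can add up to a flipped decision:

```python
import numpy as np

# Hypothetical illustration (not the authors' system): a tiny linear
# classifier over a flattened 28x28 "traffic sign" image.
# A score above zero means "turn right"; otherwise "go straight".
rng = np.random.default_rng(0)
w = rng.normal(size=784)                 # toy model weights

x0 = rng.normal(size=784)
x = x0 - (w @ x0) / (w @ w) * w          # remove the component along w ...
x = x + 1.0 * w / (w @ w)                # ... then set the score to exactly +1.0

def classify(img):
    return "turn right" if w @ img > 0 else "go straight"

# Fast-gradient-sign-style perturbation: every pixel moves by at most eps,
# but all the moves line up against the weights, so the effects add up.
eps = 0.01
x_adv = x - eps * np.sign(w)

print(classify(x), "->", classify(x_adv))   # the decision flips
```

Real deep networks are non-linear, but the same effect carries over, which is why a few well-placed dots on a sign can be enough to change what the system “sees.”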

The paper was authored by Dr. Gaétan Hains and Arvid Jakobsson (of the Huawei Parallel and Distributed Algorithms Lab at the Huawei Paris Research Centre) together with Khmelevsky. “Safety of DL systems is a serious requirement for real-life systems and the research community is addressing this need with mathematically-sound but low-level methods of high computational complexity,” notes the trio’s paper. They point to the significant work still to be done on security, software and verification to ensure that systems relying on deep learning are as safe as they can be.

“It sounds very abstract,” says Khmelevsky, “but it isn’t. It’s here today, whether it’s in your car or a device that recognizes your voice and commands.”

“Deep Learning-based artificial intelligence has had immense success in applications like image recognition and is already implemented in consumer products,” notes Jakobsson. “But the power of these techniques comes at an important cost compared to ‘classic algorithms’: it is harder to understand why they work, and harder to verify that they work correctly. Before deploying DL-based AI in safety-critical domains, we need better tools for understanding and exhaustively exploring the behaviour of these systems, and this paper is a work in this direction.”

Do Hains, Jakobsson and Khmelevsky have the answer to preventing hacks that could send your car straight when it should turn left? Not yet, but they are developing research proposals that could help ensure that your car, and the artificial-intelligence systems it relies on, don’t get fooled.

“Safe AI is an important research topic attracting more and more attention worldwide,” says Hains. “Dr. Khmelevsky brings software engineering expertise to complement my team's know-how in software correctness techniques. We expect to produce new knowledge and basic techniques to support this new trend in the industry.”