Governments embrace machine learning to counter online extremism

With pressure growing on tech companies to clamp down on extremist content, governments see possible solutions in ad company AI


As Defence IQ’s recent report – ‘Countering violent extremism online: 2017’ – examines, there is a demand for new tactics in the online fight against radical groups. While there have been some successes, many of the latest terrorist attacks are believed to have been at least partly facilitated through online engagement, whether in recruiting impressionable people to carry out violent acts or in organising the logistics of the attacks themselves.

The issue has become a key concern for world leaders. Theresa May told the United Nations General Assembly last week that tech companies must go “further and faster” in reclaiming the internet from those with violent intentions.

SEE ALSO: Is the UK counter extremism model working?

From a technology standpoint, many governments and social media platforms are indeed focusing on developing new tools and resources that can be used to address a broad range of threats to law enforcement and to the public at large. 

“There are a number of different and interesting avenues being built around this,” Defence IQ was told by a source working on social media activity at the National Counterterrorism Center (NCTC), who asked not to be named.

"There are a lot of lessons we can learn from the way advertising companies approach audience identification and segmentation, and message delivery."

“The shared hash database that was announced by Facebook, Google, Microsoft and Twitter has great potential. And the Redirect Method – developed by Google’s Jigsaw and UK-based start-up Moonshot CVE – has shown benefits.”
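As a rough illustration of how such a shared database can work, here is a minimal Python sketch (an illustrative assumption, not the consortium’s actual system) that checks an uploaded file’s digest against a pool of hashes of previously flagged material. Real systems typically rely on perceptual fingerprints that survive re-encoding; a plain cryptographic hash keeps the idea simple.

```python
import hashlib

# Illustrative shared database: digests of files that participating platforms
# have already flagged as extremist (placeholder entry, not real data).
SHARED_HASH_DB = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",  # placeholder digest
}


def sha256_of_file(path):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def is_known_flagged_content(path):
    """Return True if the file's digest matches an entry in the shared pool."""
    return sha256_of_file(path) in SHARED_HASH_DB
```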

But the representative also admitted that the CVE Task Force has extremely limited funds to draw on as it looks to develop information-sharing products. Even with the expertise of DARPA (the Defense Advanced Research Projects Agency), which has been developing technology in this space to help identify the people behind extremist posts, there are still limits on what can be done.

“Truth be told, online CVE is still an area where I think there’s a lot of potential for technological solutions,” the NCTC source explained.  

“One of the most interesting things over the last few years has been the increasing focus on advertising technology, with the likes of tracking software and chatbots. Those solutions are not necessarily a silver bullet, but there are a lot of lessons to be learned from the way they approach audience identification, segmentation and message delivery. We should be looking very closely at that.”
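To make the ad-tech comparison concrete, the sketch below mimics audience segmentation and message delivery in the loose spirit of redirect-style campaigns: a user’s recent search terms are mapped to a risk segment, and the higher-risk segments are served counter-messaging. All keywords, segments and messages are invented placeholders, not material from any real campaign.

```python
# Hypothetical keyword lists per risk segment (placeholders only).
RISK_KEYWORDS = {
    "high": {"placeholder recruitment phrase", "placeholder travel query"},
    "medium": {"placeholder propaganda title", "placeholder banned channel"},
}

# Hypothetical counter-messages to deliver alongside search results.
COUNTER_MESSAGES = {
    "high": "Testimonies from former members of violent groups",
    "medium": "Documentary playlist challenging extremist narratives",
    "low": None,  # general audience: no intervention
}


def segment_user(search_terms):
    """Assign a user to a risk segment based on keyword matches."""
    terms = {t.lower() for t in search_terms}
    for segment in ("high", "medium"):
        if terms & RISK_KEYWORDS[segment]:
            return segment
    return "low"


def select_message(search_terms):
    """Pick the counter-message (if any) to show to this user."""
    return COUNTER_MESSAGES[segment_user(search_terms)]


if __name__ == "__main__":
    print(select_message(["Placeholder travel query"]))
```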

"The last few years have been interesting to watch this technology evolve, but the next few years are going to be even more exciting."

The NCTC representative also pointed to developments now being made in artificial intelligence and machine learning, calling them a potential “game changer” for CVE over the next few years.

“It will allow us to get into much better datasets,” the source said.

“Of course, that means the further we get into that space, the more we will invite a new basket of questions. Who is going to host these huge datasets? Is it appropriate for the government to hold that data? These are the questions on the horizon that we need to begin preparing to answer. The last few years have been interesting to watch this technology evolve, but the next few years are going to be even more exciting.” 
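To give a sense of what machine learning might add, here is a minimal text-classification sketch: a TF-IDF plus logistic-regression pipeline (built with scikit-learn, chosen here purely for brevity) trained on a handful of invented, labelled posts and used to score a new one. The training data is a placeholder; an operational system would depend on the far larger, carefully governed datasets the source’s questions are about.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: (post text, label), where 1 marks content a
# human reviewer has tagged as extremist and 0 marks benign content.
posts = [
    ("placeholder extremist recruitment text", 1),
    ("placeholder call to violence", 1),
    ("weekend football results and highlights", 0),
    ("local charity bake sale this saturday", 0),
]
texts, labels = zip(*posts)

# TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score a new post: the probability its text resembles the flagged class.
score = model.predict_proba(["placeholder suspicious post"])[0][1]
print(f"extremist-content score: {score:.2f}")
```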

The full report, ‘Countering violent extremism online: 2017’, is now available for download.

