Those of us in the security industry tend to focus on the underlying vulnerabilities and threats that comprise risk. These are typically at a very granular level in applications, such as buffer overflows, cross-site scripting, privilege escalation, etc.
These types of vulnerabilities can lead to malicious activity by threat actors aiming to harm individuals or organizations. We attempt to assess the resulting implications to understand their impact on security posture and on personal and information privacy.
But what happens when the malicious activity broadens beyond the organizational level and affects towns, cities, countries, or even the whole world?
This, of course, isn't hypothetical. We're seeing the real-time effects play out in the trifecta of Facebook, Russian election meddling and deep data exfiltration by the likes of Cambridge Analytica. The result isn't a hacked mobile application, or a server with a backdoor trojan installed, but rather a perturbation of social activity to incite anger, fear, and potentially even violence in the network constituents.
In a 2015 IEEE paper titled 'Malicious behavior in online social networks', the authors state the following:
"Malicious behavior in Online Social Networks includes a wide range of unethical activities and actions performed by individuals or communities to manipulate thought process of OSN [online social network] users to fulfill their vested interest. Such malicious behavior needs to be checked and its effects should be minimized."
They clearly articulate what the actual risk is: Manipulation of thought process.
We have decided to dub this `The Malicious Network Effect`, and it feels like a step toward a very undemocratic and Orwellian future in which a select few can manipulate the thoughts and activities of the many (the prototypical imbalance of security). In a sense, Facebook, and the extensive user data it carries, gives rise to the possibility of using language as a weapon of mind control against an audience of over 2 billion. A scary thought indeed, and one that has caused the US Federal Trade Commission to launch a probe into Facebook and the extensive data leakage affecting millions of citizens.
The problem is a tough one to address though. Existing security detection mechanisms have significantly evolved over the last decade, moving far beyond the old-school hash matching of antivirus into much more nuanced algorithmic, contextual and machine-learning based approaches. Even so, the process of detecting specific social media accounts that are inciting negative behavior or posting fake news enters into a whole new realm of malicious detection.
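To make the contrast with old-school signature matching concrete, the toy sketch below scores an account from a few behavioral features rather than from a content hash. Every feature, weight, and threshold here is an invented assumption for illustration only; a real system would learn these from labeled data at enormous scale.

```python
from dataclasses import dataclass
import math

@dataclass
class AccountActivity:
    posts_per_day: float
    duplicate_ratio: float  # fraction of posts that are near-duplicates of each other
    link_ratio: float       # fraction of posts containing external links

def suspicion_score(a: AccountActivity) -> float:
    """Combine simple behavioral features into a logistic score in (0, 1).

    The weights and the feature set are illustrative assumptions, not tuned values.
    """
    z = 0.05 * a.posts_per_day + 3.0 * a.duplicate_ratio + 2.0 * a.link_ratio - 2.5
    return 1.0 / (1.0 + math.exp(-z))

# A typical human account vs. an automated amplification account (hypothetical numbers).
typical = AccountActivity(posts_per_day=3, duplicate_ratio=0.05, link_ratio=0.2)
bot_like = AccountActivity(posts_per_day=200, duplicate_ratio=0.9, link_ratio=0.95)

print(round(suspicion_score(typical), 3))   # low score
print(round(suspicion_score(bot_like), 3))  # high score
```

Even this trivial model shows why behavioral detection is harder than hash matching: the adversary can shift every one of these features toward "normal" at some cost, which is exactly the cat-and-mouse dynamic the game-theoretic work below tries to formalize.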
In a joint paper from Vanderbilt University, Amherst College, and Army/Naval research labs, the authors use game-theoretic mechanics to model Adversarial Classification on Social Networks as a Stackelberg Game and subsequently develop an experimental programmatic model to potentially detect this type of behavior. The efficacy of their model at the scale of 2 billion people is anyone's guess, but it's a step in the right direction when Mark Zuckerberg himself has admitted to the seriousness of the problem and even recently apologized for Facebook's lack of response in light of the Cambridge Analytica data mining.
This kind of vulnerability, if we can even call it that, requires a whole new level of thinking. Moving beyond a single app, a single system, or even a closed network, to The Malicious Network Effect on countries and the population at large. We don't have the answers here, but it's a trend in maliciousness and risk that we are keeping a close eye on.
Today it's Facebook, but the advertising technology that runs deep inside Facebook also runs inside Google and Twitter, and out to the billions of mobile devices and applications. It seems an easy jump to go from leveraging social networks to manipulate thought to releasing viral mobile applications that also do the bidding of this new type of adversary. The possibility to coerce is ever present.
So what do we do when the very networks that were created to connect and unite us become a weapon that turns us against ourselves?
We must continue to develop algorithms that detect this type of behavior. The owners of large social networks and advertising technology must do their part to ensure their networks are not being used maliciously. And lastly, as we watch the rest of 2018 unfold and prepare for the midterms, we must all be vigilant in determining the source and truthfulness of the information we consume.
The Team @ Mi3 Security