By Benjamin Dynkin & Barry Dynkin
Our eyes were the gatekeepers between fact and fiction, reality and myth - then the internet came along. The visual information we encounter and interact with on the web is digitally created and manipulated - and we’re not ready for it.
In the domain of email-based fraud, perpetrators have evolved beyond broad, “Nigerian Prince”-esque campaigns. No longer are they limited to crude schemes that are easily detected.
Instead, they use sophisticated, targeted campaigns that combine social engineering with visual deception and manipulation. The goal is to generate sensory overload and trick individuals into divulging critical information, such as usernames and passwords, or to overcome their resistance through psychological pressure and shock tactics, as documented in this research report on the psychological mechanisms used in ransomware splash screens.
Source: De Montfort University / Sentinel One
Phishing campaigns, for example, are crafted to replicate the precise look, feel, and interactivity of any targeted website. This allows attackers to fool all but the most careful, cautious, sophisticated eyes in a matter of seconds.
Not too long ago, we may have felt confident in our ability to spot a fake or illegitimate URL before clicking it. But taking content we receive through email at face value is no longer an option.
As an example,
www.paypal.com is very clearly different from
https://1drv.ms/xs/s!Av1UTWuihweod-TkgS78WJR5_oujx5gA. But crooks have found a way to fool our eyes anyway. The following text:
www.xn--slesforce-q1a.com can be rendered - via Punycode, the encoding that represents internationalized (Unicode) domain names in plain ASCII - as
www.sàlesforce.com, which closely resembles the domain of CRM software giant Salesforce (note the use of à in place of a).
If we clicked the link or copied and pasted it to open the page in a local browser, it could lead to a malicious website. A mere accent mark (in some cases such marks are almost completely indiscernible) allows criminals to lure users into a dangerous trap, no matter the amount of awareness training these users may have received.
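The trick above can be unmasked programmatically. Below is a minimal sketch, using Python's standard-library `punycode` codec and `urllib.parse`; the function names are illustrative, not from any particular security library.

```python
# Decode Punycode ("xn--") labels in a hostname and flag hostnames
# whose decoded form contains non-ASCII characters - a common sign
# of a homograph (lookalike) domain.
from urllib.parse import urlsplit


def decoded_labels(hostname: str) -> list[str]:
    """Return the hostname's labels with any Punycode labels decoded to Unicode."""
    out = []
    for label in hostname.split("."):
        if label.startswith("xn--"):
            # The "punycode" codec decodes the part after the "xn--" prefix.
            label = label[4:].encode("ascii").decode("punycode")
        out.append(label)
    return out


def looks_like_homograph(url: str) -> bool:
    """Flag URLs whose decoded hostname contains non-ASCII characters."""
    hostname = urlsplit(url).hostname or ""
    return any(ord(ch) > 127 for label in decoded_labels(hostname) for ch in label)


print(decoded_labels("www.xn--slesforce-q1a.com"))            # ['www', 'sàlesforce', 'com']
print(looks_like_homograph("https://www.xn--slesforce-q1a.com/login"))  # True
print(looks_like_homograph("https://www.salesforce.com/"))              # False
```

Note that a non-ASCII character is not proof of malice - plenty of legitimate domains are internationalized - but for an organization whose trusted services all use ASCII domains, it is a cheap and effective red flag.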
To see the differences between potential websites, let’s look at an example of a phishing splash page. That’s the webpage victims arrive at after clicking a spiked link, usually set up to harvest their credentials.
Take a look at the images below. The first screenshot (left) shows PayPal’s login screen (Figure 1), which can be found at:
https://www.paypal.com/signin. The second screenshot was taken on a known malicious site that was used to phish PayPal credentials (Figure 2).
Though the links are different, the sites are visually indistinguishable, down to the pixel. Thus, with a bit of Punycode, a victim can be sent off to a malicious site that looks exactly like the real paypal.com.
The target won’t recognize the fake until it’s too late.
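Since appearance proves nothing, a defensive check should ignore how a link looks and compare the parsed hostname against an explicit allowlist of exact domains. The sketch below illustrates the idea; the allowlist contents and function name are hypothetical.

```python
# Trust is decided by exact hostname match against an allowlist,
# never by visual similarity. Punycode-encoded labels are rejected
# outright, since the organization's trusted hosts are all ASCII.
from urllib.parse import urlsplit

TRUSTED_HOSTS = {"www.paypal.com", "www.salesforce.com"}  # illustrative


def is_trusted(url: str) -> bool:
    """Exact, case-insensitive hostname match - lookalikes fail."""
    hostname = (urlsplit(url).hostname or "").lower()
    if any(label.startswith("xn--") for label in hostname.split(".")):
        return False
    return hostname in TRUSTED_HOSTS


print(is_trusted("https://www.paypal.com/signin"))            # True
print(is_trusted("https://www.xn--slesforce-q1a.com/login"))  # False
```

This is exactly the comparison a human eye gets wrong and a string comparison gets right: `www.sàlesforce.com` and `www.salesforce.com` differ by one code point, and only one of them is in the set.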
While email is still the predominant vector for such campaigns, perpetrators increasingly branch out to social media to stage attacks that combine social engineering with visual deception.
Email-based attacks generally have one of two missions. They
- deliver a malicious payload or
- gather credentials to facilitate further intrusion.
Attacks leveraging social media services can have a much broader scope. On social media platforms, a wide array of social engineering tricks can be used.
Once attackers have invaded the victim’s inner circle on a platform like Facebook or LinkedIn, they can gather intelligence on a target, influence opinions, or affect an individual’s actions.
Social media substantially increases the menu of options for abuse. Attackers can easily create fake identities or impersonate real people, then interact with a wide range of targets under those assumed identities, all the while relying on victims’ willingness to trust their eyes and visual cues more than they should.
On social media, a picture is worth a thousand lies.
A little bit of information allows a fraudster to impersonate almost anyone, to a degree that would be imperceptible to most individual targets.
Also, if someone does not seek to impersonate a specific individual, they can still effectively build a great deal of social credibility with a small kernel of visual deception.
Need an example? Just think of all the star-spangled banners, bald eagles, random baby pictures and vacation photos stolen from real people’s profiles that allowed Russian agents on Facebook to pass themselves off as American patriots in the runup to the 2016 Presidential elections.
What does this mean for IT security teams and the users they’re tasked to protect? If we can’t trust our eyes, we must shield them from exploits with methods that don’t rely on users’ ability to visually verify trustworthiness.
Since we have established that we cannot rely on individuals to parse threats, even if they are adequately trained and risk-aware, we must develop policies, procedures, and technologies that acknowledge the basic truth that end-users are, ultimately, fallible.
Facing this fact and the likelihood of compromise through any number of attacks that rely on social engineering and visual deception should lead us to focus on prevention. This would also require a fundamental re-conceptualization of the role end users play in the wider security ecosystem.
Isolate users from web-borne threats.
Why leave them exposed to threats that they’re not equipped to handle at all?
Instead, for example, IT could provision login tools with the credentials for approved apps and web services baked in, as part of a centrally managed security solution.
This method has been found to reliably protect employees from unknowingly exposing themselves and their organization to phishing attempts. The reason is simple: Users cannot get snookered into handing over passwords that they don’t know.
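The mechanism can be sketched in a few lines: a credential vault releases a password only when the requesting origin exactly matches the origin recorded at enrollment, so a pixel-perfect fake on a different host never receives anything. All class and method names, URLs, and credentials below are illustrative, not any real product’s API.

```python
# A toy credential vault that keys stored credentials by exact origin
# (scheme + hostname). A lookalike domain simply has no entry, so the
# user never has a password to hand over.
from urllib.parse import urlsplit


class CredentialVault:
    def __init__(self):
        self._store = {}  # origin -> (username, password)

    def enroll(self, url: str, username: str, password: str) -> None:
        self._store[self._origin(url)] = (username, password)

    def credentials_for(self, url: str):
        """Return credentials only on an exact origin match; otherwise None."""
        return self._store.get(self._origin(url))

    @staticmethod
    def _origin(url: str) -> str:
        parts = urlsplit(url)
        return f"{parts.scheme}://{(parts.hostname or '').lower()}"


vault = CredentialVault()
vault.enroll("https://www.paypal.com/signin", "alice", "s3cret")  # fake credentials

print(vault.credentials_for("https://www.paypal.com/home"))      # ('alice', 's3cret')
print(vault.credentials_for("https://www.paypa1.com/signin"))    # None (note the digit 1)
```

This is the same principle password managers and WebAuthn rely on: the matching is done by software on the exact origin, not by a human squinting at an address bar.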
Employees should not be expected to serve as the impenetrable last line of defense for their employer’s IT network. The increasing risk from sophisticated threats that exploit the gullibility of humans mandates taking users out of the line of fire.