
Today’s snake oil salesman is equipped with military-grade psyops technology capable of influencing the most sophisticated, technically plugged-in people on earth. Even the savviest of us are at extreme risk. Why? Because human beings’ nature is to trust by default. Without trust, we wouldn’t survive as a species. 

And when you take media in any form, media we have grown to trust over a lifetime, and weave in financial fraud, it is inevitable that someone, somewhere will fall for the AI ruse.

Artificial intelligence-based “social engineering” scams are quickly becoming the purest, most effective form of psychological manipulation. Simply put, we are unprepared to deal with what’s already here. 

Deepfake artificial intelligence scams are the digital equivalent of a sociopathic, psychopathic, narcissistic, gaslighting, violent predator. 

It is so bad that OpenAI, the developer of ChatGPT, recently announced that it terminated accounts associated with state-affiliated threat actors.

“Our findings show our models offer only limited, incremental capabilities for malicious cybersecurity tasks. In partnership with Microsoft Threat Intelligence, we have disrupted five state-affiliated actors that sought to use AI services to support malicious cyber activities. We also outline our approach to detecting and disrupting such actors in order to promote information sharing and transparency regarding their activities.” — OpenAI

The “state-affiliated actors” they are speaking of include hackers tied to China, Iran, North Korea and Russia. Banning these bad actors is a temporary fix and will do nothing to stop the problem. 

P.T. Barnum was wrong

Artificial intelligence deepfakes aren’t the root of the problem here. We are the problem. It is said, “There’s a sucker born every minute.” In reality, roughly 250 people are born every minute, and by my calculations, every single one of them is a sucker, you and me included.

What does this mean? It means all of us are capable of being deceived. And I’ll bet all of us have been deceived or “suckered.” That’s simply a hazard of “trusting by default.” 

Just look at the evolution of the simple “phishing email scam.” Over the past 20 years, this ruse has evolved from a blanket broadcasted “scammer grammar” communication to an advanced, persistent threat that targets specific individuals by understanding and leveraging all aspects of their personal and professional lives.

We’ve already seen AI-enabled phishing schemes cause severe damage because they are many times more convincing than their predecessors. 

Everyone must stay informed about the latest scams in this era of rapid technological progress and AI integration. Over the past year, we witnessed a tumultuous landscape in cybersecurity, marked by significant corporations falling victim to malware and ransomware attacks, and by a proliferation of opportunities for cybercriminals due to advancements in AI. 

Regrettably, the forecast indicates a further escalation in the sophistication and prevalence of cyber threats and scams, making it essential for individuals to remain vigilant and proactive in safeguarding their digital assets.

Deepfake AI is wreaking havoc 

The rapid proliferation of deepfake websites and apps is wreaking havoc, unleashing a wave of financial and personal fraud that threatens individuals and businesses alike.

The proliferation of deepfakes represents a troubling trend fueled by the accessibility and sophistication of AI technology. Even the average technology user now has access to tools capable of impersonating individuals, given sufficient video or images of the target.

Consequently, we must anticipate a surge in the utilization of both video and audio deepfakes within cyber scams. It’s already happening. Scammers exploit deepfake videos and/or audio to pose as superiors, soliciting urgent information. 

Similarly, in personal spheres, these tactics may involve impersonating family members or friends to deceive individuals into divulging sensitive information or wiring funds to pay a fake kidnapping ransom.

As ridiculous as that sounds, if you heard your daughter’s voice screaming in the background of a phone call, you’d likely cough up the cash if you thought your loved one was being held captive. 

The rise of AI-enabled deepfakes presents a formidable challenge in combating financial fraud, providing cybercriminals with unprecedented capabilities. With the aid of AI, cybercrime syndicates can swiftly update and enhance traditional wire transfer fraud tactics alongside sophisticated impersonation schemes.

This rapid evolution jeopardizes the reliability of verification and authorization processes within the financial sector, undermining trust and confidence in financial systems.

This is just the beginning 

In a sophisticated scheme, a finance worker from a multinational corporation fell prey to deepfake technology, resulting in a staggering $25 million payout to impostors posing as the company’s chief financial officer. 

The elaborate ruse unfolded during a video conference call, where the unsuspecting employee found himself surrounded by what appeared to be familiar faces, only to discover they were all expertly crafted deepfake replicas. Despite initial doubts raised by a suspicious email, the worker’s concerns were momentarily quelled by the convincing likeness of his supposed colleagues.

This incident underscores the alarming effectiveness of deepfake technology in perpetrating financial fraud on an unprecedented scale. 

This author predicts a scheme such as this will easily invade the real estate mortgage closing process. A significant amount of anonymity is already involved in closings conducted via telephone, email and digital signatures. One way to reduce the “anonymous threat” aspect would be to add a video meeting to the process, but who’s to say the participants on that video call aren’t AI-generated? 

The deepfake market delves into the depths of the dark web, serving as a favored resource for cybercriminals seeking to procure synchronized deepfake videos with audio for a range of illicit purposes, including cryptocurrency scams, disinformation campaigns, and social engineering attacks aimed at financial theft.

Within dark web forums, individuals actively seek deepfake software or services, highlighting the high demand for developers proficient in AI and deepfake technologies, who often cater to these requests.

Protect yourself and your organization

When encountering a video or audio request, it’s essential to consider the tone of the message. Does the language and phrasing align with what you’d expect from your boss or family member? Before taking any action, take a moment to pause and reflect.

Reach out to the purported sender through a different platform, ideally in person, if possible, to verify the authenticity of the request. This simple precaution can help safeguard against potential deception facilitated by deepfake technology, ensuring you don’t fall victim to impersonation scams.

  1. Stay informed: Regularly educate yourself about common AI-related scams and tactics employed by cybercriminals.
  2. Verify sources: Verify the identity of the sender through multiple channels before taking any action.
  3. Use trusted platforms: Avoid engaging with unknown or unverified sources, particularly in online marketplaces or social media platforms.
  4. Enable security features: Utilize security features such as multi-factor authentication whenever possible to add an extra layer of protection to your accounts and sensitive data. 
  5. Update software: Regularly check for software updates to mitigate vulnerabilities exploited by AI-related scams.
  6. Scrutinize requests: Cybercriminals may use AI-generated content to create convincing phishing emails or messages, so examine unexpected requests carefully before responding.
  7. Educate others: Share knowledge and awareness about AI-related scams with friends, family, and colleagues. 
  8. Verify identities: Beware of AI-generated deepfake videos or audio impersonating trusted individuals.
  9. Be wary of unrealistic offers: AI-powered scams may promise unrealistic returns or benefits to lure victims into fraudulent schemes.
  10. Report suspicious activity: Prompt reporting can help prevent further exploitation and protect others from falling victim to similar scams.

None of the above, by itself, will solve this problem. I cannot stress this enough: Organizations and their staff must engage in consistent and ongoing security awareness training, now more than ever. 

Robert Siciliano is Head of Training and Security Awareness Expert at Protect Now, a No. 1 best-selling Amazon author, media personality and architect of the CSI Protection Certification. While preventing fraud isn’t the nature of a real estate agent’s business, security is everyone’s business and in your and your clients’ best interest. Becoming competent in all things “CSI” (cyber, social, identity and personal protection) should be a priority for agents. 
