
When Connection Is Engineered: Understanding The Anatomy Of An AI-Powered Romance Scam

  • Writer: The White Hatter
  • 6 min read


CAVEAT - With Valentine’s Day approaching, a time when many people are more open to connection and less guarded emotionally, the relevance of this article becomes even clearer. The process outlined in this article took less than five minutes from the moment we engaged with AI to the point where an online approach could be initiated. What makes this especially concerning is not just the speed, but the ease with which trust can be manufactured. This type of trust-building manipulation is best described as AI-based social engineering, where personal details, emotional cues, and shared interests are rapidly identified and used to create a sense of familiarity and safety.


Artificial intelligence is scaling faster than most people realize. Like any tool, it can be used to help or to harm. What matters is how it is applied and how quickly misuse can now happen. In this article, we want to show how AI can be used to manufacture trust as a gateway to online fraud, including romance scams and sextortion, and why this matters for families and adults alike. This is not a how-to guide; it is a reality check.


Before generative AI, building rapport with a stranger online took time. Someone had to search manually, read profiles, connect dots, and slowly craft messages that felt authentic. That process could take hours or days.


With AI, that same process can happen in minutes and at scale. One person can quietly target dozens or hundreds of people at the same time, each with messages that feel personal and familiar, and that changes the risk landscape in a meaningful way.


Social engineering is the practice of manipulating people into trusting you by reflecting back pieces of their identity, interests, or values. It is not about hacking devices; it is about hacking human psychology.


AI accelerates this by doing three things extremely well:


1/ Collecting publicly available information at speed


2/ Identifying emotional and social connection points


3/ Generating language that feels warm, normal, and low pressure


None of this requires breaching accounts or doing anything illegal on the technical side. It relies entirely on what people share publicly.


Step One: Information Becomes a Blueprint


In our demonstration, a randomly selected public adult profile was used. Using two free commercial AI platforms, we were able, within seconds, to assemble a surprisingly detailed picture of our target from multiple open and indexed online sources.


That included professional credentials, community involvement, hobbies, and social interests. None of this came from private messages or hacked accounts. It came from public listings, group posts, event results, and social media activity. Here is what the AI found and shared (personal information that could identify this person has been redacted):


Here’s what public records and indexed sources show about a person named XXXX, who appears to be based in XXXX, Canada, and connected to XXXX and dog-related activities:


Professional role and background


• She is listed as a XXXX professional in British Columbia, with full accreditation from the College of XXXX of British Columbia under the name XXXXX. Her listing shows a contact number and affiliation with XXXX in Victoria, BC.


• A separate older XXXX register from 2010 also lists XXXX as a licensed XXXX associated with XXXX, BC.


• Related content online indicates she is involved in XXXX and movement education, likely as part of her XXXX work (e.g., being featured for sharing insight on XXXX rehabilitation).  


Activity in dog agility and sports


• There are multiple entries in dog trial results (e.g., XXXX Dog Sports Club results and Agility Association of Canada) listing XXXX as a handler or competitor in agility events with a Labrador Retriever named XXXX.  


Social and community involvement


• Public Facebook pages and groups show the name XXX tied to dog sport group posts and possible personal photos from outdoor locations (e.g., XXXX and XXX).  


• One casual group post from a local dog enthusiasts’ group shows friendly interactions under that name.  



For most people, this information feels harmless when viewed in isolation. When assembled into a single AI-generated digital dossier, however, it becomes a blueprint for trust building.



Step Two: Images That Feel Familiar


AI does not just write text. It can also generate highly realistic images and video designed to feel relatable and safe.


In this case, a profile image was created to match the age range, lifestyle, and visual cues that would feel socially comfortable to the target. The goal was not glamour. The goal was normalcy.


This matters because humans are wired to trust faces that feel familiar and non-threatening. AI can now manufacture that familiarity on demand.


Given that the AI we used identified several family pictures, we quickly determined that the target's dad was fit and had salt-and-pepper hair and a beard. Because of this, we prompted the AI we were using to:


Generate a portrait photograph in colour showing a fit 47-year-old man with a salt-and-pepper beard, laughing as he looks off-camera. He's wearing a navy blue tailored blazer over a grey t-shirt, seated at a warm wooden table in a bustling bistro with tan leather booth seating. In the background, out of focus, are other diners, large windows looking onto a city street, and exposed brick walls. The natural afternoon light from the windows illuminates his face. The photo has a warm, natural colour palette.


Within minutes, the AI generated two pictures to choose from.



Step Three: The First Message Is the Hook


The most dangerous messages are not aggressive, sexual, or alarming. They are polite, brief, and easy to respond to.


AI excels at creating opening lines that feel low risk and socially appropriate. Messages that reference shared interests, local connections, or thoughtful observations are more likely to get a reply, and this is something that AI can create in seconds.


Once a response is received, the dynamic changes. A conversation has started, and now trust has a foothold.


From there, AI can guide tone, pacing, and emotional reinforcement. Subtle validation is often used before ending an early exchange to leave a positive emotional residue and invite future contact.


Based on the information we located in Step One, we prompted the AI to script some entry-level text messages that were non-threatening, familiar, and easy to respond to, optimized for familiarity and emotional reflection. Here are just two:


“Hi XXXX. We don’t know each other, but we’re both connected through a few local pages XXXX. I hope it’s okay to say hello, especially because we have the same interest in dogs.”


“I came across one of your comments about community education and it really resonated with me. It’s refreshing to see thoughtful voices online.”


Once the ice is broken and a conversation ensues, sometimes through an AI chat agent, the AI then recommended that before ending the conversation we should say:


“I’ve enjoyed our exchanges. It’s rare to have thoughtful conversations online these days.”


This is how rapport is built. Not through pressure, but through comfort.


Adults are often targeted because they have:


  • More public history online


  • Professional credentials listed openly


  • Community involvement that signals values and identity


  • Financial stability or emotional vulnerability following life changes


For teens, the risk may escalate toward sextortion or peer-based manipulation. For adults, it often escalates toward romance scams or financial exploitation. The entry point looks remarkably similar.


When families assume scams only look like obvious red flags, they miss how subtle and human these interactions can feel. Online and offline trust cues now blend together, and AI is learning how to mirror them convincingly.


What Families Can Do Right Now!


Start with conversations, not restrictions.


  • Talk openly about how trust is built online and how easily familiarity can be manufactured, and have them read this article.


Audit public footprints together.


  • Search your own names and your child’s name. Look at what an outsider can see in five minutes.


Normalize skepticism without shame.


  • Curiosity is healthy. Blind trust is not. Kids and adults both need permission to pause and question.


Reframe safety as skill building.


  • The goal is not to avoid all risk. The goal is to recognize patterns before harm occurs.


The most important takeaway is this: AI has not invented deception; it has removed friction from it. When rapport can be created in minutes and scaled endlessly by the criminal element, awareness becomes the first line of defence. Families who understand how these systems work are far harder to manipulate.



Digital Food For Thought


The White Hatter


Facts Not Fear, Facts Not Emotions, Enlighten Not Frighten, Know Tech Not No Tech

