Research & Awareness Design

Year: 2025
Industry: Digital Safety / AI Ethics
Client: Self-Initiated Research
Project Duration: 3 months

[Image: a white robot with blue eyes and a laptop]

Why This Project Matters

Today’s job market is being reshaped by generative AI — not just for good, but also for exploitation. Scammers now use advanced AI models to craft job postings that look polished, credible, and dangerously convincing. Job seekers lost $750.6M in 2024, and the traditional signs of “scam language” are no longer reliable.

This project explores a critical question:
Can machine-learning models still detect scams when the scammer is using AI too?

My goal wasn't only to analyze the data; it was to understand how AI changes digital trust and to translate those findings into simple, accessible resources for students and young professionals.

How I Investigated AI-Written Scams

Working with 17,880 real and fake job postings, I re-generated every fake post with an LLM to create modern, convincing scam examples.
This let me test the models against realistic, AI-enhanced deception instead of outdated scam language.
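
To make that step concrete, here is a minimal sketch of how the re-generation pass might look, assuming the official openai Python client; the model name, prompt wording, and the rewrite_posting helper are illustrative, not the project's exact setup.

```python
# Sketch: rewrite a known-fake job posting with an LLM so the dataset
# reflects modern, AI-polished scam language. Assumes the `openai`
# Python client; model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Rewrite the following fraudulent job posting so it reads as a "
    "polished, professional listing. Keep the underlying claims and "
    "structure, but modernize the tone:\n\n{posting}"
)

def rewrite_posting(posting: str, model: str = "gpt-4o-mini") -> str:
    """Hypothetical helper: return an LLM-refreshed fake job posting."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(posting=posting)}],
    )
    return response.choices[0].message.content

# Applied to every fake post, this yields the AI-enhanced negatives
# that the classifiers are evaluated against.
```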

What I built and tested:

  • AI-expanded & refined dataset

  • Clean preprocessing & reproducible pipeline

  • NLP analysis (word frequency, POS tagging, readability, topics), illustrated in the first sketch after this list

  • ML classifiers including XGBoost, shown in the second sketch after this list

  • Visual comparisons of real vs. fake vs. AI-generated job posts
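
For the NLP analysis, here is a small sketch of the kind of per-posting statistics that can be compared across real, fake, and AI-generated text. It assumes nltk and textstat; the posting_features helper is hypothetical, and the nltk data package names can vary by version.

```python
# Sketch: per-posting linguistic features (length, readability, POS mix)
# of the kind compared across real, fake, and AI-generated postings.
from collections import Counter

import nltk
import textstat

# Tokenizer and tagger data; package names vary across nltk versions.
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

def posting_features(text: str) -> dict:
    """Hypothetical helper: surface statistics for one job posting."""
    tokens = nltk.word_tokenize(text)
    pos_counts = Counter(tag for _, tag in nltk.pos_tag(tokens))
    return {
        "n_tokens": len(tokens),
        # Flesch Reading Ease: higher scores mean simpler text.
        "flesch_reading_ease": textstat.flesch_reading_ease(text),
        # Share of base-form adjectives (Penn Treebank tag JJ).
        "adjective_share": pos_counts.get("JJ", 0) / max(len(tokens), 1),
    }
```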

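And for the classification step, a minimal sketch of a TF-IDF plus XGBoost baseline, assuming scikit-learn and xgboost; the file name, column names, and hyperparameters are placeholders, not the project's exact configuration.

```python
# Sketch: TF-IDF features feeding an XGBoost classifier for
# real-vs-fake job postings. File and column names are placeholders;
# the label column is assumed to be 0 (real) / 1 (fraudulent).
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from xgboost import XGBClassifier

df = pd.read_csv("job_postings.csv")  # hypothetical dataset file

X_train, X_test, y_train, y_test = train_test_split(
    df["description"], df["fraudulent"],
    test_size=0.2, stratify=df["fraudulent"], random_state=42,
)

model = make_pipeline(
    # Word and bigram TF-IDF features; XGBoost accepts the sparse output.
    TfidfVectorizer(max_features=20_000, ngram_range=(1, 2), stop_words="english"),
    XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1,
                  eval_metric="logloss"),
)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```
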
And to make the research accessible:

  • A narrated explainer podcast

  • A comic book for quick learning

  • A website organizing all findings

Together, these formats paired technical accuracy with human-centered communication.
Link to GitHub Repository
