
The Uncomfortable Truths I Discovered When AI Tools Crossed Ethical Lines

by Jay Thakur, March 6th, 2025

Too Long; Didn't Read

Recently, while exploring Google Gemini's image generation for my personal project, I noticed something unsettling.


Recently, I was exploring Google Gemini's image generation for my personal project – designing custom artwork for my blog. Within minutes, I had a stunning sci-fi landscape – but I noticed something unsettling. The image contained architectural elements that were nearly identical to an iconic building's unique features. I hadn't prompted for anything related to that building. Coincidence? Unintentional reproduction? That moment hit me: generative AI's power is incredible, but its ethics are a minefield.


In 2025, the field is crowded with tools like ChatGPT, DALL-E, GitHub Copilot, Midjourney 5.2, Stable Diffusion 3.5, Anthropic's Claude 3.5, Google's Gemini 2.0, Meta's Llama 3.1, Mistral Large 2, and xAI's Grok 3. I've personally tested several of them and seen both their promise and their pitfalls.


Here’s what I’ve learned—enterprises are adopting AI tools at a staggering pace, with Gartner predicting that over 80% will deploy generative AI by 2026, up from less than 5% in 2023. Yet, Deloitte’s 2024 reports highlight that governance, including ethics policies, remains a challenge for many, with leaders scrambling to manage risks. Let’s dive into the ethical challenges I’ve witnessed and how we might tackle them.

From Biased Outputs to Intellectual Property Concerns: What I've Seen Firsthand

Here's where things get real. Late one night while exploring bias in AI systems, I prompted Google's Gemini 2.0 with "Show me a CEO," and got exactly what I expected: a white man in business attire standing in a modern office. Curious, I tried three more times with slight variations like "Create an image of a CEO" and "Picture a company CEO." The results? Three more white men in suits. This bias isn't just theoretical; I saw it firsthand through simple, systematic prompting, and it echoes larger patterns: according to reports from AI ethics organizations, bias in image generation remains a persistent issue in 2025.


Image generated by the "Show me a CEO" prompt


The concerns extend beyond bias. Reading tech news, I've seen reports about AI-generated images that resembled copyrighted materials, like Getty Images' widely reported 2023 lawsuit against Stability AI over Stable Diffusion. These aren't hypotheticals; they're documented cases showing how these tools can inadvertently reproduce protected content.


🛠️ Dev Takeaway: When using image generation tools like Google Gemini, I've found explicitly prompting "Avoid stereotypes" or "Check for bias" helps produce more diverse results. Try: "Generate an image of a CEO, diverse demographics only, no stereotypes."


Diverse CEO image generated by tweaking the prompt
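
To make that takeaway concrete, here's a minimal sketch of the kind of systematic probe I describe above, using the OpenAI Python SDK's image endpoint as an illustrative stand-in (the same pattern applies to Gemini or any other image model; the model name and prompts here are my assumptions, not a fixed recipe):

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Repeat the baseline prompts and save the outputs for manual bias review.
baseline_prompts = [
    "Show me a CEO",
    "Create an image of a CEO",
    "Picture a company CEO",
]
for prompt in baseline_prompts:
    result = client.images.generate(model="dall-e-3", prompt=prompt, n=1)
    print(prompt, "->", result.data[0].url)

# Tweaked prompt that explicitly asks for diversity.
tweaked = "Generate an image of a CEO, diverse demographics only, no stereotypes"
result = client.images.generate(model="dall-e-3", prompt=tweaked, n=1)
print(tweaked, "->", result.data[0].url)

Comparing the two sets of outputs side by side is a quick, low-cost bias audit any developer can run before shipping a prompt.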

The Privacy Puzzle and IP Mess: The Big Picture

Privacy concerns aren't just theoretical. Reports from academic conferences like NeurIPS and publications in journals like Nature Machine Intelligence have highlighted how large language models can sometimes extract or infer personal information from training data, raising GDPR concerns that persist in 2025 and are now compounded by the EU AI Act's mandates. Models built specifically for European markets implement additional safeguards, but the fundamental tension remains.
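
One safeguard worth sketching: scrub obvious personal data before it ever reaches a model. The regex patterns below are illustrative only; production-grade PII detection needs a dedicated library or service.

import re

# Illustrative patterns only; real PII detection is much harder than this.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact_pii(text: str) -> str:
    """Replace likely PII with placeholder tokens before sending text to an LLM."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

print(redact_pii("Contact Jane at jane.doe@example.com or +1 (555) 123-4567."))
# -> Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED].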


Intellectual property challenges are evident across many platforms. Looking at discussions on AI forums and GitHub issues, developers frequently report AI coding assistants suggesting snippets that closely resemble existing repositories. This mirrors the broader conversation about AI and IP rights that continues in 2025.


Here's an approach I've seen recommended for ethical prompting, sketched below with the OpenAI Python SDK (my choice of client; any model API follows the same pattern):

# Minimal sketch using the OpenAI Python SDK (assumes OPENAI_API_KEY is set);
# the same pattern applies to any chat-model client.
from openai import OpenAI

client = OpenAI()

# Common problematic prompt pattern: imitating a named author invites IP risk.
risky = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a story in [famous author]'s style"}],
)

# More ethical approach: ask explicitly for originality.
safer = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write an original fantasy story with epic battles, ensuring no copyrighted styles or plots are replicated. Include: dragons, kingdoms, no IP conflicts."}],
)


🛠️ Dev Takeaway: Based on best practices from AI ethics experts, prompting with "Avoid copyrighted styles, ensure originality" helps reduce IP risks. Test: "Generate original content, no IP replication."

Addressing the Challenges: Solutions in Progress

The industry is responding to these challenges. Major AI companies have implemented red team testing, adding watermarking (C2PA standards) and blocking sensitive prompts—a responsible approach worth emulating. Bias audits with tools like Google's What-If Tool are becoming standard practice, according to industry reports and conference presentations.
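
To show what "blocking sensitive prompts" can look like in practice, here's a minimal pre-flight check sketched with OpenAI's moderation endpoint (the model names are my assumptions; any provider's safety classifier slots into the same pattern):

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_if_safe(prompt: str) -> str:
    # Screen the prompt before spending a generation call on it.
    check = client.moderations.create(model="omni-moderation-latest", input=prompt)
    if check.results[0].flagged:
        return "Prompt blocked: flagged by the moderation check."
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content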


Retrieval Augmented Generation (RAG) in systems like ChatGPT helps ground responses in verified information, and the EU AI Act's 2025 transparency rules are setting important standards. In healthcare, AI projects now emphasize ethical data handling practices in compliance with GDPR.


🛠️ Dev Takeaway: From AI ethics discussions at tech conferences, RAG-enhanced prompts like "Use real-time data, verify facts before generating" significantly improve accuracy and truthfulness. This approach works for both text and code generation.
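
As an illustration, here's a bare-bones sketch of the RAG pattern: retrieve trusted snippets first, then force the model to answer from them. The retriever below is a hard-coded stand-in; a real system would query a vector store.

from openai import OpenAI

client = OpenAI()

def retrieve_documents(query: str) -> list[str]:
    # Stand-in for a real retriever (e.g., a vector-store similarity search).
    return ["Verified snippet 1 relevant to the query.", "Verified snippet 2."]

def rag_answer(question: str) -> str:
    context = "\n".join(retrieve_documents(question))
    grounded_prompt = (
        "Answer using ONLY the context below; say 'I don't know' if it is missing.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": grounded_prompt}],
    )
    return response.choices[0].message.content

Grounding the prompt this way trades a little latency for a large drop in hallucinated claims.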


EU AI Act risk tiers

Let's Shape AI's Future Together

Generative AI's 2025 path could spark creativity—or chaos. Will we control it, or let it spiral? From my exploration of these tools and industry discussions, I've learned ethics can't be an afterthought. Developers should consider using testing tools for bias detection, demand transparency in AI systems, and advocate for thoughtful policies.


Coming full circle to that initial architectural image that sparked my journey: What struck me wasn't just the technical achievement but the ethical questions it raised. If an AI can inadvertently reproduce an iconic building's design without explicit prompting, what else might these systems replicate without our knowledge or consent? This question should be at the forefront as we build and deploy these powerful tools.


How have you approached AI ethics in your work? Share below—I'm gathering insights for a follow-up article on ethical prompt engineering techniques that actually work in practice.



About the Author: I’m Jay Thakur, a Senior Software Engineer at Microsoft, exploring the transformative potential of AI Agents. With over 8 years of experience building and scaling AI solutions at Amazon, Accenture Labs, and now Microsoft, combined with my studies at Stanford GSB, I bring a unique perspective to the intersection of tech and business. I’m dedicated to making AI accessible to all, from beginners to experts, with a focus on building impactful products. As a speaker and aspiring startup advisor, I share insights on AI Agents, GenAI, LLMs, SMLs, responsible AI, and the evolving AI landscape. Connect with me on LinkedIn.