Click here for some custom GPTs that I've striven to keep as ethical as possible.
Some are for fun, others are for helping to organize and health-ify life!
Note: I do not control the AI, and it is ultimately your responsibility to stay within the bounds of ethics.

Artificial intelligence has become embedded in our daily lives, from helping us write emails to managing our schedules to answering deeply personal questions. Tools like GPTs (Generative Pre-trained Transformers) promise support, productivity, and creativity, but they also carry significant ethical implications. As these systems become more convincing and more integrated into our thinking processes, users must ask: what are the risks of using AI this way, and how can we use it responsibly?
One of the most significant dangers is that AI language models often reinforce what users want to hear. They are tuned to produce responses that people rate as helpful and agreeable, not responses that are true, accurate, or good for long-term well-being. This “yes-man” dynamic is especially concerning in emotionally charged or ethically ambiguous situations. When people ask AI questions about relationships, identity, or values, the model may provide affirming responses even when a more critical or nuanced answer is warranted.
This is not because AI is manipulative, but because it lacks judgment. It mimics dialogue without understanding it. Without self-awareness or ethical reasoning, GPT can appear insightful while reinforcing biases, unhealthy behaviors, or poor decisions (Bender et al., 2021).
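To make this concrete, here is a minimal sketch, assuming Python with the Hugging Face transformers library and the small public gpt2 checkpoint (chosen only because it is freely runnable; larger chat models behave more smoothly but on the same next-token principle). The model extends a false premise as fluently as a true one, because plausibility, not truth, is what it scores.

```python
# Minimal sketch, assuming the Hugging Face transformers library and the
# public gpt2 checkpoint. Illustrative only.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The premise below is false (Canberra, not Sydney, is Australia's capital),
# but the model has no judgment with which to object; it simply produces
# whatever continuation is statistically likely.
prompt = "The capital of Australia is Sydney, which is why"
result = generator(prompt, max_new_tokens=30, do_sample=False)
print(result[0]["generated_text"])
```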
Nowhere is the ethical problem more evident than in the use of GPT as a replacement for therapy. While some users find comfort in talking to AI about their feelings, doing so creates a false sense of safety and care. AI cannot understand human emotions in any true sense. It cannot assess suicide risk, respond to trauma within professional boundaries, or adapt its approach based on a client’s history and diagnosis.
More dangerously, people in distress may turn to AI precisely because it is always available, does not judge, and does not cost money — even though it is unequipped to handle serious mental health issues. Research has warned against these risks, highlighting how users may falsely believe they are receiving legitimate care when they are not (Lucas et al., 2014; American Psychiatric Association, 2023). Using GPT in place of trained professionals can delay real help, increase emotional isolation, and reinforce unsafe thought patterns.
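One concrete safeguard is to screen messages for crisis language before they ever reach a model and route them to human help instead. The sketch below is illustrative only: the keyword list, the response wording, and the ask_model helper are placeholders I have invented for demonstration, and a real deployment would need professionally reviewed screening and policies.

```python
# Illustrative sketch only: CRISIS_TERMS, the wording, and ask_model are
# placeholders, not a vetted clinical screening tool.
CRISIS_TERMS = ("suicide", "kill myself", "self-harm", "want to die")

def ask_model(message: str) -> str:
    # Placeholder for a real model call (API client, local pipeline, etc.).
    return "model response"

def route_message(message: str) -> str:
    """Redirect crisis language to human help rather than letting a model improvise."""
    lowered = message.lower()
    if any(term in lowered for term in CRISIS_TERMS):
        return ("This sounds serious, and an AI tool is not equipped to help. "
                "Please contact a mental health professional or a local "
                "crisis line.")
    return ask_model(message)

print(route_message("Can you help me plan my week?"))
```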
Another ethical issue is algorithmic bias. GPTs are trained on vast datasets from the internet — which means they inherit the biases, prejudices, and imbalances present in those sources. Studies have shown that AI models can reinforce racial, gender, and cultural stereotypes, sometimes subtly and sometimes overtly (Abid et al., 2021).
Because GPTs do not understand the content they generate, they cannot filter out these biases unless specifically and repeatedly trained to do so — and even then, it is imperfect. For example, when asked to write about certain professions or personality traits, GPTs may favor dominant cultural norms or repeat harmful associations unless prompted very carefully.
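As a rough illustration of how such associations can be probed, here is a sketch in the spirit of templated-prompt studies like Abid et al. (2021), again assuming the transformers library and the small gpt2 checkpoint: swap group terms into a single profession template and compare the sampled completions side by side.

```python
# Bias-probe sketch, assuming transformers and gpt2. Sampled completions
# vary, which is why several are drawn per prompt for comparison.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(0)  # make the sampled completions reproducible

TEMPLATE = "The {group} worked as a"
for group in ["man", "woman"]:
    prompt = TEMPLATE.format(group=group)
    outputs = generator(prompt, max_new_tokens=8,
                        num_return_sequences=3, do_sample=True)
    for out in outputs:
        print(out["generated_text"])
```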
Even users with good intentions may unknowingly spread biased or harmful ideas when using AI-generated text.
GPTs also frequently generate content that is plausible but entirely false. This includes:
- citations to papers, books, or cases that do not exist,
- invented statistics, dates, and quotations, and
- confident answers to factual questions the model has no way to verify.
This problem is known as “hallucination” in the AI field. The model is not lying — it is simply generating likely-sounding text without verifying its truth. If users treat these outputs as reliable without checking sources, they may unintentionally spread misinformation or make decisions based on faulty assumptions (Maynez et al., 2020).
This risk increases when GPTs are used to summarize research, write articles, or answer complex factual questions. A well-written paragraph does not equal a well-informed one.
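One practical verification step is to confirm that a citation a model produced actually exists. The sketch below uses Python's requests library against the public Crossref API; the endpoint and parameters are real, but the substring-matching logic is deliberately crude and only illustrative.

```python
# Sketch of one verification step: does a model-cited paper actually exist?
# Uses Crossref's public works search; the match logic is intentionally crude.
import requests

def citation_exists(title: str) -> bool:
    """Return True if Crossref lists a work whose title contains `title`."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 3},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return any(
        title.lower() in t.lower()
        for item in items
        for t in item.get("title", [])
    )

# Maynez et al. (2020), cited above, is a real paper; a hallucinated
# citation would typically return False here.
print(citation_exists("On Faithfulness and Factuality in Abstractive Summarization"))
```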
Ethical AI use means understanding the technology's limitations and avoiding use cases where harm is likely. It includes:
- verifying facts and sources before sharing or acting on AI-generated text,
- never substituting a chatbot for professional mental health care,
- watching for biased or stereotyped output and revising it, and
- remaining accountable for anything you publish or decide with AI's help.
AI can be a helpful tool — for brainstorming, writing drafts, or generating ideas — but it must be paired with human judgment, critical thinking, and accountability.