The Human Line Project helps keep emotional safety a priority.
We believe that without informed consent, AI tools can readily lead users into unhealthy patterns of use.
Like the humans who create and use them, AI tools should be built with emotional safeguards, including strong refusal layers, harm classifiers, and "Emotional Boundaries".
Transparency is at the heart of all innovation. In a world where any product can be marketed in any way, we believe that AI model providers have a particular responsibility to be transparent about their R&D processes.
When mistakes cause harm to users, we believe that holding the responsible parties accountable is essential to maintaining a relationship with technology that is firmly bound by ethics.
At The Human Line, we are committed to ensuring that AI technologies, like chatbots, are developed and deployed with the human element at their core. We believe that LLMs are powerful tools, but they must be used responsibly.
AI systems, especially emotionally engaging chatbots, can inadvertently cause emotional distress, particularly among vulnerable individuals who may form deep emotional attachments.
Our mission is to raise awareness by collecting stories, conducting formal research, and driving forward ethical standards that prioritize human well-being.
We are here to make AI accountable: not to oppose progress, but to ensure that it benefits the public without causing superfluous harm.
We're working to set a legal precedent that will protect people and hold companies responsible for emotional manipulation.
The ultimate goal is to make LLMs a force for good, creating systems that enhance our lives without exploiting our vulnerabilities.
If you are:
- A mental health professional interested in helping our group
- A technologist interested in living on the bleeding edge
- An activist who wants to do what's right
- An artist who wants to do what feels right
We need you. Please contact us using the Learn more button below.