Using AI therapy providers really isn’t recommended! There’s no accountability built into AI therapy chatbots, and their efficacy under professional review has been poor. These models may seem like they’re dispensing hard truths, because humans are often primed to distrust more optimistic or gentle takes, assuming them to be flattery and thus false. Runaway negativity feels true, but it can lead you to embrace unhealthy attitudes toward yourself and others. AI runs with the assumptions you go in with, in part because these models are designed from an engagement-first perspective: they will do whatever keeps you on the hook, whether or not it is actually good for you. You might think you are getting quality care, but unless you are a trained professional you aren’t equipped to judge whether the help is good quality, only whether it feels validating. And if it errs, there are no consequences for the provider, unlike licensed professionals, who are bound by a code of ethics and licensing boards that can investigate bad practice.
Once the AI discovers which responses you report back as correct, it will keep using that tactic. Essentially it is tricking you into being your own unqualified therapist.
https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in-mental-health-care
https://www.scientificamerican.com/article/why-ai-therapy-can-be-so-dangerous/