Company using ChatGPT for mental health support raises ethical issues

  • A digital mental health company is facing criticism for using GPT-3 technology without letting users know.
  • Koko co-founder Robert Morris told Insider the experiment was “exempt” from informed consent law due to the nature of the test.
  • Some medical and tech professionals said they felt the experiment was unethical.

As ChatGPT use cases grow, one company is using artificial intelligence to experiment with digital mental health care, exposing ethical gray areas around the use of technology.

Rob Morris — co-founder of Koko, a free, nonprofit mental health service that partners with online communities to find and treat those at risk — wrote in a Twitter thread Friday that his company used GPT-3 chatbots to help develop responses for 4,000 users.

Morris said in the thread that the company had tested a “co-pilot approach with humans overseeing the AI as needed” in messages sent through Koko’s Peer Support, a platform he described in a video as “a place where you can get help from our network or help someone else.”

“We make it easier to help others and with GPT-3 we make it even easier to be more efficient and effective as a help provider,” Morris said in the video.

ChatGPT is a variant of GPT-3, which creates human-like text based on prompts, both created by OpenAI.

Koko users were not initially told that the replies had been developed by a bot, and “once people learned that the posts had been co-created by a machine, it didn’t work,” Morris wrote Friday.

“Simulated empathy feels weird, empty. The machines haven’t had a human experience, so when they say ‘this seems difficult’ or ‘I understand’, it sounds inauthentic,” Morris wrote in the thread. “A chatbot response generated in 3 seconds, no matter how elegant, somehow feels cheap.”

However, on Saturday Morris tweeted “some important clarifications.”

“We didn’t pair people up to chat with GPT-3, unbeknownst to them. (In retrospect, I could have phrased my first tweet to better reflect that),” the tweet read.

“This feature was opt-in. Everyone knew about the feature when it was live for a few days.”

Morris said on Friday that Koko “pulled that off our platform pretty quickly.” He noted that AI-based posts were “rated significantly higher than those written by humans themselves” and response times dropped by 50% thanks to the technology.

Ethical and legal concerns

The experiment caused an uproar on Twitter, with some public health and tech professionals calling out the company for alleged violations of informed consent law, a federal policy that requires human subjects to provide consent before participating in research.

“It’s deeply unethical,” media strategist and author Eric Seufert tweeted Saturday.

“Wow, I wouldn’t admit it publicly,” Christian Hesketh, who describes himself on Twitter as a clinician-scientist, tweeted Friday. “Participants should have given informed consent and this should have gone through an IRB [institutional review board].”

In a statement to Insider on Saturday, Morris said the company “doesn’t pair people up to chat with GPT-3” and said the option to use the technology was removed after realizing it “looks like an inauthentic experience”.

“Instead, we were giving our peer supporters the option to use GPT-3 to help them compose better responses,” he said. “They were getting suggestions to help them write more favorable responses faster.”

Morris told Insider that Koko’s study was “exempt” from informed consent law and cited previous research published by the company that was also exempt.

“Each individual must provide consent to use the service,” Morris said. “If this was an academic study (which it isn’t, it was just a product feature being explored), it would fall under an ‘exempt’ category of research.”

He continued: “It imposed no additional risk on users, no deception, and we do not collect any personally identifiable information or personal health information (no email, phone number, IP address, username, etc.).”

A woman searches for mental health support on her phone. Beatriz Vera/EyeEm/Getty Images

ChatGPT and the gray area of mental health

Still, the experiment raises questions about the ethics and gray areas surrounding the use of AI chatbots in healthcare more broadly, after the technology had already caused turmoil in academia.

Arthur Caplan, professor of bioethics at New York University’s Grossman School of Medicine, wrote in an email to Insider that using AI technology without informing users is “extremely contrary to ethics”.

“The ChatGPT intervention is not the standard of care,” Caplan told Insider. “No psychiatric or psychological group has verified its effectiveness or exposed the potential risks.”

He added that people with mental illness “require special sensitivity in any experiment,” including “thorough review by a research ethics board or institutional review board before, during, and after the intervention.”

Caplan said using GPT-3 technology in this way could have a broader impact on its future in healthcare.

“ChatGPT may have a future, as may many AI programs such as robotic surgery,” he said. “But what happened here can only delay and complicate that future.”

Morris told Insider that his intention was to “highlight the importance of humans in the human-AI discussion.”

“I hope it doesn’t get lost here,” he said.

