
Mental-Health-Support Chatbot [NLP]

Updated: May 12, 2022

INTRODUCTION

In the area of mental illness and mental-health crisis support, NLP is growing fast as the engine behind support-based AI chatbots. Apps like Wysa, Woebot, and Youper built this kind of support system to help people feel heard. These mental-health chatbots can be tremendously helpful, but they tend to feel scripted and can even come across as demoralizing.

Training these chatbots on Reddit or other social-media data can be a huge mistake: the dark side of human nature makes its way into the training data and can produce the worst possible results.

WOEBOT

Woebot is well suited to someone who feels there is no one to talk to or trust, or who just wants to share thoughts without being judged. However, the app's approach is not perfect by any means: conversations often feel extremely scripted and can leave the user demotivated or frustrated. The responses are not expressive; they read as canned. The model seems to be driven more by conditional logic than by the user's actual expression.


Under the hood there could simply be a spreadsheet of questions and answers: conditional logic matches the user's input against a question and returns the corresponding answer.

This is acceptable for a person without an illness who is just lonely, but for patients it can lead to frustration.
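A minimal sketch of that lookup-table style of chatbot, assuming a simple keyword match; the keywords and canned replies here are entirely made up, not Woebot's actual script:

```python
# Toy rule-based chatbot: a dictionary stands in for the "spreadsheet"
# of keyword -> canned-reply rules. All rules below are hypothetical.
RULES = {
    "lonely": "I'm sorry you feel lonely. Would you like to talk about it?",
    "anxious": "Let's try a breathing exercise together.",
    "sad": "It's okay to feel sad. What happened today?",
}
FALLBACK = "Tell me more about how you're feeling."

def reply(message: str) -> str:
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:      # first matching rule wins
            return answer
    return FALLBACK              # no rule matched

print(reply("I've been feeling lonely lately"))
```

Every input maps to a fixed string, which is exactly why such bots feel scripted: the model never generates language of its own.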

In principle, a large end-to-end neural network can replace these complicated, manually designed rules. That solves the scripted-text problem, but it comes at a cost: the model can drift out of human control. The huge amount of data on the internet today reflects both the light and the dark sides of human nature, and training on it carelessly can be a huge mistake and a major safety concern in health care.
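One such end-to-end model, discussed below, is Microsoft's DialoGPT. A hedged sketch of generating a free-form reply with sampled decoding via the Hugging Face transformers library (this downloads the pretrained weights; the prompt is just an example):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# DialoGPT-small: the smallest released checkpoint of the model.
tok = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

# Encode the user's message, terminated by the end-of-sequence token.
ids = tok.encode("I feel really low today." + tok.eos_token,
                 return_tensors="pt")

# Sampled decoding: pick tokens stochastically instead of greedily,
# which reduces repetition but makes the output less predictable.
out = model.generate(
    ids,
    max_length=60,
    do_sample=True,
    top_k=50,
    top_p=0.9,
    pad_token_id=tok.eos_token_id,
)

# The reply is everything generated after the input prompt.
reply = tok.decode(out[0, ids.shape[-1]:], skip_special_tokens=True)
print(reply)
```

Note that nothing in this pipeline constrains *what* the model says, which is precisely the safety problem raised above.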

DialoGPT, one of the state-of-the-art open-source conversational AI models, was trained by Microsoft on 147M comment chains from Reddit. (Some models are still withheld from release over safety concerns.) Earlier versions produced repetitive, irrelevant results; after switching to sampled decoding the output became more relevant, but also more harmful.

Fine-tuning DialoGPT on data produced by therapists: The quality and usability of a neural network depend on the quality of the data it is trained on. A network trained on raw Reddit data, for example, may sound fluent yet be more harmful than one could imagine. Training on cleaner, curated data can correct some of that negative behavior and give better results. The Wysa application handles chat quite well and in a professional manner, and can have a genuinely positive impact. Even so, a model fine-tuned on therapist-produced data still sometimes returns harmful results.

Impacts: Safety should be the utmost priority for a mental-health chatbot. The cleaner and more precise the training data, the more reliably the chatbot will perform. The tokenizer should also be tuned well so that it segments and understands the text properly. Next, we talk about the various NLP techniques and vectorization modules.

IN DEPTH NLP


Introduction: Computers interact with humans through programming languages, which are rigidly structured for the machine. Natural Language Processing (NLP) is far harder, because human language is full of ambiguity: multiple words can share one meaning (synonymy), and one word can carry several meanings (polysemy). A sentence can change meaning depending on each word and the situation, which is why NLP is one of the most difficult parts of AI.

Use case:


NLP usage is booming right now. Everyone uses NLP technology in daily life without even noticing. NLP is mainly used in:

- Spell checkers
- Parsing synonyms and antonyms
- Translation [Google Translate]
- Virtual assistants [Siri, Alexa, Google Assistant]
- Chatbots
- Search engines [Google]

Representing Words: Computers don't understand human language, so we need a way to make it machine-readable. One option is to create a unique label for every word in the world, which is inefficient and loses information such as synonymy. Another approach is one-hot encoding, i.e., vectorizing the words, but one-hot coding is inefficient too.

Similarity Based Representation: Take the word "bank". In daily life it has two main meanings: a financial institution, and the land beside a body of water. If "bank" occurs in a sentence that also contains the word "money", we can guess it refers to the financial institution. Conversely, if words like "river", "land", or "water" appear alongside "bank", it is probably about land.

Word2Vec: Word2vec is a technique in computer science, specifically in NLP, that lets us perform mathematical operations on words. Each word is assigned a vector of attributes, for example:

King → Horse
- Gender: 1 → 1
- Rich: 1 → 0
- Has tail: 0 → 1
- Authority: 1 → 0

This is how each word can carry many attributes that together define the object. There is a training technique called CBOW (Continuous Bag of Words), in which the surrounding context is used to predict a target word. The other way is skip-gram, in which CBOW is reversed: the model takes the target word and predicts its context. This is how we implement a word2vec model -

But there is a con in word2vec: because it assigns one vector per word, words with multiple senses collapse into a single representation. For example:

- He didn't receive FAIR treatment.
- Fun FAIR in New York City this summer.

Word2vec creates one fixed vector for the word "fair", even though the word means something different in each sentence.

Conclusion: In any NLP task, pre-processing and vectorizing the data is the main challenge; done poorly, it leads to wrong outputs. A more precise and more realistic way to vectorize words is BERT.


This model generates contextualized embeddings: it vectorizes each word based on the meaning of the whole sentence.
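A hedged sketch of this contextual behavior using the Hugging Face transformers library (it downloads the bert-base-uncased weights; the example sentences are made up). The same surface word "bank" receives a different vector in each sentence:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(sentence: str, word: str) -> torch.Tensor:
    """Return the contextual embedding of `word` inside `sentence`."""
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]   # (seq_len, 768)
    # Locate the word's token position to pick out its vector.
    tokens = tok.convert_ids_to_tokens(enc["input_ids"][0])
    return hidden[tokens.index(word)]

v_money = embed("he deposited money at the bank", "bank")
v_river = embed("they sat on the bank of the river", "bank")

# The two "bank" vectors are related but not identical,
# unlike word2vec's single fixed vector per word.
sim = torch.cosine_similarity(v_money, v_river, dim=0).item()
print(round(sim, 3))
```

This is exactly the property that makes BERT-style embeddings better suited to ambiguous words like "bank" or "fair" than static word2vec vectors.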


