Kriti Jain

Do no harm: Conscientious product development in the mental health space





Image Source: Everton Vila



A core principle of both psychology and medicine is Primum non nocere: “First, do no harm.” At BrainSightAI, this is central to our product development philosophy.


We run into ethical concerns all the time while building Snowdrop, our app that tracks behaviour and symptoms to support better understanding and management of mental illnesses. How do we make sure that, in our attempt to help people, we don’t unintentionally cause harm? How do we ensure the app meets its objective of enriching the patient-doctor relationship with data-driven insights without harming our users in any way?


To deal with these issues systematically, we use a framework that lets us break ethical concerns down and identify the best solutions. Laina, CEO of BrainSightAI, absolutely loves structure and frameworks. She introduced me to TRIZ, a “Creative Problem Solving Method” developed by the Russian scientist Genrich Altshuller (Ekmekci & Koksal, 2015). One of the key principles of his method is using contradictions to solve problems. With some background in philosophy, I found this especially interesting: even though philosophers hate contradictions, proof by contradiction, or reductio ad absurdum, is a very useful tool for developing arguments. Can contradictions help us address ethical concerns as well?


Image: Basic Structure of TRIZ


According to the TRIZ methodology, one problem that arises during any kind of innovation is that while your product sets out to be beneficial in some way, that benefit can be accompanied by a contradicting effect that is harmful. Let’s take an example. Broadly speaking, BrainSightAI’s aim with the Snowdrop app is to improve people’s mental health. One way we do this is by helping our users understand and manage their symptoms effectively. However, our advisor Dr. Leela pointed out that greater self-management could actually make users less likely to seek help: they might get the impression that the app alone is enough to manage their symptoms. This could end up being harmful, because dealing with mental illness often requires professional help and a strong support system. So on one hand we are helping users, because they will be better able to manage their symptoms; on the other, we could be harming them, because they might be less likely to seek help. A contradiction.


What our ethical framework allows us to do is identify this contradiction. We first identify the “useful function”, the benefit we are aiming to achieve - in our example, better management of the disorder. We then identify the “harmful actions” that contradict this useful function - in our example, a reduction in help-seeking. Identifying these contradictions matters because it lets us find ways to ensure we are doing no harm while still meeting our product goals.
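To make that bookkeeping concrete, here is a minimal sketch of how such a contradiction could be recorded in code. The Contradiction class and its field names are purely illustrative, not part of TRIZ itself or of any actual BrainSightAI tooling.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Contradiction:
    """One TRIZ-style contradiction: a useful function paired with
    the harmful actions that work against it (illustrative only)."""
    useful_function: str        # the benefit the product aims to deliver
    harmful_actions: List[str]  # contradicting effects that could cause harm
    mitigations: List[str] = field(default_factory=list)  # candidate resolutions

# The contradiction discussed in this post:
self_management = Contradiction(
    useful_function="Better self-management of symptoms",
    harmful_actions=["Users may feel the app alone is enough and seek help less"],
    mitigations=["Encourage seeking external help alongside self-management"],
)

for harm in self_management.harmful_actions:
    print(f"'{self_management.useful_function}' is contradicted by: {harm}")
```

However a team chooses to record them, the point is the same: naming the useful function and its harmful actions side by side is what makes the contradiction visible and workable.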


How did we ultimately address this? We realized that our app wants to encourage both better self-management of the disorder and seeking external help, and that finding a balance between the two would keep our users on their journey toward mental well-being. To make sure the app maintains this balance, we decided to curate a panel of mental health practitioners, people living with mental illness, and their primary caregivers to give us ongoing feedback on whether we are addressing this dilemma appropriately.


We remain committed to the core principle of “First, do no harm.” And as our products evolve, we will continue to apply this lens to ensure that the interests of our users are never compromised.

