With long NHS waiting lists for talking therapies, many of those living with mental health problems are turning to apps as an alternative solution, but could they be exacerbating the problem?
Over the last two years mental health apps have seen a huge surge in popularity, according to the mental health charity Mind. Inevitably the increasing use of smartphones has led people to look for support in new ways, with many now turning to apps.
Compared to paying for a weekly session with a private therapist, the main advantage of a mental health app is that there’s often no time or financial commitment. For those living in remote, rural areas, accessing face-to-face therapy can be particularly challenging and many still feel a stigma in seeking professional help.
The positives are real: “An app can reduce the time people are left without help, becoming an early support system or safety net, whilst waiting for face-to-face care,” says Simon Leigh, Health Economist for the health app finder ORCHA. “Later after a therapist has been seen, apps can reinforce strategies and track information.”
But first of all, what do we mean by mental health apps? They broadly fall into three categories: mindfulness and meditation apps, mood-screening apps, and treatment apps.
“Mindfulness and meditation apps can potentially be a great learning tool,” says Hilda Burke, psychotherapist and author of The Phone Addiction Workbook. “The ideal scenario is that you learn the technique, then use it off-line so you’re not dependent on your phone. However, many of these apps are built to be addictive, so there is a risk of becoming dependent on the very apps that were supposed to help you.”
She highlights the irony that our increased use of smartphones has been found in studies to be partly to blame for the rise in mental health problems, yet we are steering people back to their phones to try and solve the issue. Numerous studies have shown how bad it is for us to be on our phones just before bed and the moment we wake up, yet many of the meditation and mindfulness apps promote sleep meditations and morning wake-up meditations.
A key concern with mood-screening apps is what advice you’re given on receiving your result. So your app concludes you’re depressed – now what? When a diagnosis is made by a professional rather than an app, the same questionnaires are administered in person by a clinician, who can then devise a suitable treatment plan, possibly combining medication and therapy. A clinician is also able to pick up on cues in your body language which may give even bigger clues as to how you’re feeling. Without that immediate follow-up plan, the app user could be left feeling even more anxious and alone.
Silja Litvin, psychologist and founder of eQuoo, an emotional fitness game that aims to boost mental health, points out that those suffering from depression or anxiety often have issues with cognitive impairment, motivation and drive, so once you’ve got your ‘result’ you’re even less likely to start reading up on what to do next than someone who’s mentally healthy. Getting users who are having mental health issues to stick with an app can therefore be tricky. Her app is unusual in that it aims to boost the user’s mental health during play. At the moment it is aimed at preventing mental health problems from arising, but in the future she plans to add therapeutic features to the game so it can also help with treating them.
Some treatment apps do offer a remote connection to real life therapists and studies have shown that some of these have comparable outcomes to face-to-face therapy. NICE is currently evaluating these apps and gathering real-world evidence to measure their effectiveness.
Burke points out the potential problem of there being no ‘time boundaries’ with apps which could lead to inappropriate use – for instance, users could get caught in the trap of lying awake worrying, playing with their treatment app, when in fact they would be better off trying to sleep. The phone offers up so many other things – chat, dating matches, news – so the risk is that what starts off as a desire for relaxation and calm can become a gateway to further stress and anxiety, thus making sleep even more elusive.
The lack of validation is a huge issue too, says Simon Leigh of ORCHA. “A review published in 2014 by the Karolinska Institute showed that some apps developed for the purpose of suicide prevention included some potentially harmful content, such as featuring descriptions of suicide and could in fact be used as a source of ideas by highly vulnerable people. This wasn’t the apps’ intended use, but because they hadn’t been adequately tested, this significant safety issue hadn’t been addressed.”
NICE’s evaluations of therapist-guided CBT apps (where users also have some digital or real-life contact with a named therapist) include full reviews from clinical psychologists to ensure they follow CBT therapy standards and that the app content is safe.
The experts we spoke to agree it’s essential that these apps are developed with input from both mental health professionals and sufferers themselves, and for them to be rigorously tested to see if they make a difference. For instance, the Happy Not Perfect App was three years in the making and was developed with the help of neuroscientists and psychologists. It includes ‘recharge’ sessions filled with mini-games, quick techniques and breathing exercises, a ‘burn bin’ to rid your mind of unhelpful thoughts and a daily gratitude list.
Similarly, the Zone app, founded by Saira Gill, was developed in partnership with neuroscientists and music therapists to make sure the content had the right elements, proven to boost mental health and peak performance. “For instance, we decided to create our own music from scratch so that it didn’t evoke any negative memories in the user and would therefore be ‘neutral’, whilst being tailored to each user.”
“Typically it takes a month of regular usage of a good-quality app to see a difference,” Litvin points out. But even the best-quality, officially approved apps should be used alongside real-life professional support, not instead of it, advises Eva Critchley, Head of Digital at Mind.
“We want everyone who needs mental health support to be offered access to a range of high-quality treatments (which could include digital help) so that they can get the support that’s right for them when they need it.”
To make sure you’re getting the best out of an app, keep evaluating whether and how the app is helping you. Ideally, only use apps found through portals such as the NHS Apps Library or the health app finder ORCHA, which has a strict vetting process. Do make sure you’ve checked that your online therapist is accredited by an official body such as the British Association for Counselling and Psychotherapy (BACP).
This piece by Serena Oppenheim was first seen on ‘Forbes’, 16 January 2019.
At the Living in Digital Times press conference at CES 2019, we saw a preview of the lifestyle tech trends we will see in 2019, everything from digital health products to children’s technology in the form of toys and educational products.
The lifestyle field has grown considerably as people have become more comfortable with technology, embracing everything from smartphone apps that run their lives to wearables that monitor steps taken and heart rate.
This growth is set to accelerate in 2019 as technology begins to reach into areas of our life that have so far escaped technological solutions, as exemplified by the featured speakers at the news conference.
The first topic discussed was the improvement in digital health technology over the past year. Alzheimer’s disease, in particular, was given special attention, which isn’t surprising as the costs associated with the disease are expected to soar over the next decade, reaching $1 trillion by 2030.
The focus for several start-ups is on the caregiver – those who will need to monitor and attend to the needs of Alzheimer’s patients. The development of VR systems that simulate the experiences of Alzheimer’s sufferers is of particular interest, as it’s part of a larger trend we’ve seen for a few years now.
The CEO of MobileHelp, Rob Flippo, discussed LifePod, a virtual caregiver that proactively interacts with a user, providing everything from simulated socialization for the infirm to monitoring the user’s voice for signs of a potential health crisis. Eric White, CEO of the so-called BabyTech company Miku, has developed similar technology to monitor infants in their cribs. Both promise to alert users to potentially dangerous conditions before the users themselves could even possibly be aware of them.
Technology harnessing the advances in voice recognition is a significant trend for 2019. Several companies, including MobileHelp and Miku, are combining voice recognition and artificial intelligence to provide companionship for the elderly and the young alike, stimulating their brains either to develop their mental capabilities or to arrest their decline.
Voice technologies paired with AI are also going to be introduced as medical and therapeutic assistants for doctors and mental health professionals. Advances in psychology have allowed companies to develop voice recognition techniques that can identify voice biomarkers indicating specific conditions like depression, so expect these to begin rolling out this year both in your doctor’s office and in various app marketplaces.
After a few years of declining interest, wearables are about to break out onto the scene again in a big way. Industry professionals already expect shipments of wearable technology to double this year, with as many as 245 million wearable devices expected to be sold in 2019.
Fitness technology is also expected to make major advances with products like Sense Arena, which can allow a user to simulate hockey drills to improve their game. Fitbit is launching a major wearable monitor this year that goes beyond the usual step tracker and heart rate monitor that most people are used to. It’s generated a lot of excitement so far, so once we get to see the product ourselves here at CES, expect to hear a lot more about it in the next couple of weeks.
This piece by John Loeffler was first seen on ‘Interesting Engineering‘, 7 January 2019.
It’s the age-old question for every sleep-deprived parent: what is the best way to get your baby settled and off to sleep?
From controlled crying, co-sleeping and self-settling to “no crying” sleep strategies, there is a plethora of sleep methodologies, advice and professional help on offer. For parents and caregivers, it can be overwhelming to work out what fits their family best.
Infant Mental Health Awareness Week (11-15 June) is a timely reminder of the importance of taking an infant mental health approach to any sleep or settling techniques.
The first three years of a child’s life are a period of incredible growth and development. The earliest relationships with their parents and caregivers not only form the foundations for the child’s on-going development, but can also impact their health and wellbeing.
Relationships let babies express themselves. For example, a cry, a laugh or question is responded to with a cuddle, a smile or an answer. By communicating back and forth we are creating and sharing experiences together, strengthening the bond and helping the baby learn more about the world at the same time.
Science has shown us that babies seek closeness and comfort from their parents. They also cry as a way of communicating with us. A little grizzle might not require intervention but if it escalates to a cry, the message is very clear – the baby needs help.
Tuning in and responding to a child with warmth and gentleness lays the foundations for a child’s healthy development and helps to shape the adult they will become. It also lays the foundation for restful sleep. A calm and content child, who understands there is emotional support available to them from their caregivers should they need it, is more likely to experience better quality sleep than a child who is left to cry and “manage on their own”.
Parents and caregivers should be wary of “one-size-fits-all” sleep strategies – no two babies are the same; they all have different personalities and temperaments. By understanding the baby’s needs, cues and capabilities you can create an emotionally and physically safe sleep space.
Our advice for parents and caregivers is to choose an approach that focuses on the infant’s and family’s mental health and wellbeing – one where parents are encouraged to comfort their babies to reach sustainable change in a gentle and loving way. …Read more at https://thesector.com.au/2019/01/03/importance-of-an-infant-mental-health-approach-to-sleep/
This piece by Cindy Davenport was first seen on ‘The Sector‘, 3 January 2019.
In March 2017, Facebook launched an ambitious project to prevent suicide with artificial intelligence.
Following a string of suicides that were live-streamed on the platform, the effort to use an algorithm to detect signs of potential self-harm sought to proactively address a serious problem.
But over a year later, following a wave of privacy scandals that brought Facebook’s data use into question, the idea of Facebook creating and storing actionable mental health data without user consent has numerous privacy experts worried about whether the company can be trusted to make and store inferences about the most intimate details of our minds.
The algorithm touches nearly every post on Facebook, rating each piece of content on a scale from zero to one, with one expressing the highest likelihood of “imminent harm,” according to a Facebook representative.
That data creation process alone raises concern for Natasha Duarte, a policy analyst at the Center for Democracy and Technology.
“I think this should be considered sensitive health information,” she said. “Anyone who is collecting this type of information or who is making these types of inferences about people should be considering it as sensitive health information and treating it really sensitively as such.”
Data protection laws that govern health information in the US currently don’t apply to the data that is created by Facebook’s suicide prevention algorithm, according to Duarte. In the US, information about a person’s health is protected by the Health Insurance Portability and Accountability Act (HIPAA) which mandates specific privacy protections, including encryption and sharing restrictions, when handling health records. But these rules only apply to organisations providing healthcare services such as hospitals and insurance companies.
Companies such as Facebook that make inferences about a person’s health from non-medical data sources are not subject to the same privacy requirements. Facebook acknowledges as much and does not classify the information it creates as sensitive health information.
Facebook hasn’t been transparent about the privacy protocols surrounding the data around suicide that it creates. A Facebook representative told Business Insider that suicide risk scores that are too low to merit review or escalation are stored for 30 days before being deleted, but Facebook did not respond when asked how long and in what form data about higher suicide risk scores and subsequent interventions are stored.
Facebook would not elaborate on why data was being kept if no escalation was made.
Facebook’s algorithm is meant to be a next step from suicide hotlines, which only screen callers who are actively seeking help.
The risks of storing such sensitive information are high without the proper protection and foresight, according to privacy experts.
The clearest risk is the information’s susceptibility to a data breach.
“It’s not a question of if they get hacked, it’s a question of when,” said Matthew Erickson of the consumer privacy group the Digital Privacy Alliance.
In September, Facebook revealed that a large-scale data breach had exposed the profiles of around 30 million people. For 400,000 of those, posts and photos were left open. Facebook would not comment on whether or not data from its suicide prevention algorithm had ever been the subject of a data breach.
Following the public airing of data from the hack of married dating site Ashley Madison, the risk of holding such sensitive information is clear, according to Erickson: “Will someone be able to Google your mental health information from Facebook the next time you go for a job interview?”
Dr. Dan Reidenberg, a nationally recognised suicide prevention expert who helped Facebook launch its suicide prevention program, acknowledged the risks of holding and creating such data, saying, “pick a company that hasn’t had a data breach anymore.”
But Reidenberg said the danger lies more in stigma against mental health issues. Reidenberg argues that discrimination against mental illness is barred by the Americans with Disabilities Act, making the worst potential outcomes addressable in court.
Once a post is flagged for potential suicide risk, it’s sent to Facebook’s team of content moderators. Facebook would not go into specifics on the training content moderators receive around suicide, but insists that they are trained to accurately screen posts for potential suicide risk.
In a Wall Street Journal review of Facebook’s thousands of content moderators in 2017, they were described as mostly contract employees who experienced high turnover and little training on how to cope with disturbing content. Facebook says that the initial content moderation team receives training on “content that is potentially admissive to Suicide, self-mutilation & eating disorders” and “identification of potential credible/imminent suicide threat” that has been developed by suicide experts.
Facebook said that during this initial stage of review, names are not attached to the posts that are reviewed, but Duarte said that de-identification of social media posts can be difficult to achieve.
“It’s really hard to effectively de-identify people’s posts. There can be a lot of context in a message that people post on social media that reveals who they are, even if their name isn’t attached to it,” she said.
If a post is flagged by an initial reviewer as containing information about a potential imminent risk, it is escalated to a team with more rapid response experience, according to Facebook, which said the specialised employees have backgrounds ranging from law enforcement to rape and suicide hotlines.
These more experienced employees have more access to information on the person whose post they’re reviewing.
“I have encouraged Facebook to actually look at their profiles, to look at a lot of different things around it, to see if they can put it in context,” Reidenberg said, insisting that adding context is currently one of the only ways to determine risk with accuracy. “The only way to get that is if we actually look at some of their history, and we look at some of their activities.”
Sometimes police get involved
Once reviewed, two outreach actions can take place. Reviewers can either send the user suicide resource information or contact emergency responders.
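Pieced together, the process described in this article amounts to a simple triage pipeline: an algorithmic risk score, a first pass by content moderators, escalation to specialists, and one of two outreach actions. The sketch below is purely illustrative – the threshold values, function names and data fields are our own assumptions, not anything Facebook has published.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str
    risk_score: float  # 0.0-1.0; 1.0 = highest likelihood of "imminent harm"

# Hypothetical threshold: scores below it are never reviewed by a human.
REVIEW_THRESHOLD = 0.5  # illustrative value only

def moderator_confirms_imminent_risk(post: Post) -> bool:
    # Stand-in for the first-stage human review described in the article.
    return post.risk_score >= 0.8  # placeholder logic

def specialist_judges_emergency(post: Post) -> bool:
    # Stand-in for the specialised team's judgement.
    return post.risk_score >= 0.95  # placeholder logic

def triage(post: Post) -> str:
    """Illustrative triage flow based on the article's description."""
    if post.risk_score < REVIEW_THRESHOLD:
        # Per the article, low-scoring data is stored for 30 days, then deleted.
        return "store_30_days_then_delete"
    # First-stage review by trained content moderators (posts de-identified).
    if not moderator_confirms_imminent_risk(post):
        return "no_further_action"
    # Escalation to a team with rapid-response experience; two outreach options.
    if specialist_judges_emergency(post):
        return "contact_emergency_responders"
    return "send_suicide_prevention_resources"

print(triage(Post("p1", "example post", risk_score=0.3)))   # store_30_days_then_delete
print(triage(Post("p2", "example post", risk_score=0.97)))  # contact_emergency_responders
```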
“In the last year, we’ve helped first responders quickly reach around 3,500 people globally who needed help,” wrote Facebook CEO Mark Zuckerberg in a post on the initiative.
Duarte says Facebook’s surrender of user information to police represents the most critical privacy risk of the program.
“The biggest risk in my mind is a false positive that leads to unnecessary law enforcement contact,” she said.
Facebook has pointed out numerous successful interventions from its partnership with law enforcement, but in a recent report from The New York Times, one incident documented by police resulted in intervention with someone who said they weren’t suicidal. The police took the person to a hospital for a mental health evaluation anyway. In another instance, police released personal information about a person flagged for suicide risk by Facebook to The New York Times.
The GDPR requires companies to get consent before creating or storing mental health data.
Facebook uses the suicide algorithm to scan posts in English, Spanish, Portuguese, and Arabic, but they don’t scan posts in the European Union.
The prospect of using the algorithm in the EU was halted because of the region’s special privacy protections under the General Data Protection Regulation (GDPR), which requires that users give websites specific consent to collect sensitive information, such as that pertaining to someone’s mental health.
In the US, Facebook views its program as a matter of responsibility.
Reidenberg described the sacrifice of privacy as one that medical professionals routinely face.
“Health professionals make a critical professional decision if they’re at risk and then they will initiate active rescue,” Reidenberg said. “The technology companies, Facebook included, are no different than that they have to determine whether or not to activate law enforcement to save someone.”
But Duarte said a critical difference exists between emergency professionals and tech companies.
“It’s one of the big gaps that we have in privacy protections in the US, that sector by sector there’s a lot of health information or pseudo health information that falls under the auspices of companies that aren’t covered by HIPAA and there’s also the issue information that is facially health information but is used to make inferences or health determinations that is currently not being treated with the sensitivity that we’d want for health information.”
Privacy experts agreed that a better version of Facebook’s program would require users to affirmatively opt-in, or at least provide a way for users to opt out of the program, but currently neither of those options are available.
Emily Cain, a Facebook policy communications representative, told INSIDER, “By using Facebook, you are opting into having your posts, comments, and videos (including FB Live) scanned for possible suicide risk.”
Most experts in privacy and public health spoken to for this story agreed that Facebook’s algorithm has the potential for good.
According to the World Health Organisation, nearly 800,000 people commit suicide every year, disproportionately affecting teens and vulnerable populations like LGBT and indigenous peoples.
Facebook said that in their calculation, the risk of invasion of privacy is worth it.
“When it comes to suicide prevention efforts, we strive to balance people’s privacy and their safety,” the company said in a statement. “While our efforts are not perfect, we have decided to err on the side of providing people who need help with resources as soon as possible. And we understand this is a sensitive issue so we have a number of privacy protections in place.”
Kyle McGregor, Director of New York University School of Medicine’s department of Pediatric Mental Health Ethics, agreed with the calculation, saying, “suicidality in teens especially is a fixable problem and we as adults have every responsibility to make sure that kids can get over the hump of this prime developmental period and go on to live happy, healthy lives. If we have the possibility to prevent one or two more suicides accurately and effectively, that’s worth it.”
This piece by Benjamin Goggin was first seen on ‘Business Insider Australia‘, 7 January 2019.
Luckily, there’s an app for almost everything, including meditation, mindfulness, and self-care as a whole. Here are five wellness apps to help you cultivate positive mental health and get you through your day in the best way possible.
Awaken teaches meditation through a series of audio-guided practices, which draw from classic mindfulness philosophy and focus on contemplating real life – work, relationships, habits, and especially culture and politics – with an emphasis on unearthing our inner wisdom and applying it to our lives. The app features a social media approach to mindfulness practice, ending each session with a journal prompt and a news feed of responses.
Awaken is a new kind of meditation app that combines mindfulness practice, contemplation, and journaling.
The app applies Buddhist philosophy to our entire lives and uses a social networking approach to encourage community and conversation.
Simple Habit is a free meditation app whose five-minute meditation sessions are designed to provide a brief respite at any point in your demanding schedule. Calling itself “a daily vacation for your mind,” Simple Habit offers goal- and situation-based meditation for busy people.
Simple Habit was created by Yunha Kim, who understood the pressures of a demanding work life all too well. She found meditation to be an amazing way to refocus and center herself.
Meditation, the practice of focusing on your breath to reach a heightened awareness of your current thoughts, emotions, and environment, has been shown to reduce stress, improve focus, boost creativity, and cultivate compassion. At its core, meditation is about calming the mind and bringing you back to the present, so if you feel like you could benefit from this, Simple Habit can help you out.
Shine’s main focus is on personal growth, motivational messaging and other self-improvement topics, which are delivered by way of text and audio. Through short-form audio, users can get help across a number of areas, including things like productivity, mindfulness, focus, stress and anxiety, burnout, acceptance, self-care for online dating, creativity, forgiveness, work frustrations and more.
The app also sends daily motivational texts based on research-backed materials that help users better understand the topic at hand.
Shine is like a best friend and self-help guru rolled into one neat package.
Lifesum is a general wellness app that does everything from letting you share healthy recipes, to helping you track your progress at the gym, to reminding you to drink enough water. In general, it helps promote a healthier lifestyle and keeps you on track with your health goals.
Lifesum is free to download, but you can purchase a three-month, six-month, or annual subscription.
My Possible Self is an app which sets out to make those difficult, necessary conversations about your own mental health much easier.
The app starts with a preliminary questionnaire assessing how you’re feeling, asking you to rate variables such as your recent levels of energy, ability to concentrate and confidence out of ten.
From there, you can build a customised plan, and work through self-help modules designed to help you release any anxiety that comes from stress and change, using a combination of cognitive behavioural therapy (CBT), problem-solving therapy, positive psychology and interpersonal therapy.
The app also encourages you to learn more about your specific triggers, unhealthy practices and tools for maintaining a more manageable lifestyle.
This piece by Ntianu Obiora was first seen on ‘Pulse Nigeria‘, 4 January 2019.
Sarah Bateup analyses how technology-enabled mental health treatments could be used to help those with Type 2 diabetes
A growing number of studies have indicated that patients with diabetes are more likely to experience common mental health problems such as anxiety or depression. Grigsby et al. (2002) reported that patients with diabetes are three times more likely to be depressed than the general population, and suggested that 40% of patients with diabetes had a diagnosable anxiety disorder. In addition, people with diabetes can also experience disease-specific psychological issues such as denial of the diagnosis, avoidance of treatment-seeking behaviours, fear relating to hypoglycaemia or future complications, eating disorders and adjustment disorder. It is recognised that patients with diabetes respond well to evidence-based psychological therapies such as cognitive behavioural therapy (CBT).
Ieso Digital Health delivers CBT, via online written communication, to NHS patients across the UK. The online Ieso Method has demonstrated equivalence to traditional face-to-face CBT and has become an accepted, convenient and effective way to receive psychological therapy.
Ieso Digital Health, in collaboration with Roche Diabetes Care, is running a research trial to test whether CBT delivered online by qualified therapists using the Ieso Method is as effective as face-to-face CBT at reducing symptoms of depression and anxiety in patients with Type 2 diabetes.
Anyone over the age of 18 who has had a diagnosis of Type 2 diabetes for at least 12 months and feels that they may be anxious or depressed can participate. Participants must have access to an internet-enabled device such as a tablet, laptop, desktop computer or smartphone. Participants must feel confident in their ability to read and write in English, although spelling and grammar are unimportant.
CBT will be delivered by British Association for Behavioural and Cognitive Psychotherapies (BABCP) accredited CBT therapists. The therapists will provide treatment online using synchronous written (typed) communication. For further information relating to this method please see www.iesohealth.com. The therapy provided will be identical to traditional face-to-face CBT in that it will use evidence-based, disorder-specific treatment protocols with fidelity to the CBT model.
CBT will be delivered online using the web-based platform provided by Ieso Digital Health. This means that participants can be located in a place of their choosing whilst receiving therapy. Participants may also choose the day and time of day of their therapy appointments.
Participants will receive a disorder-specific treatment protocol for patients with an anxiety disorder or depression and a diagnosis of Type 2 diabetes. Participants will receive all elements of the treatment, although it may be moderated to meet their individual needs. For example, some patients may find working behaviourally more helpful than working with cognitions. These types of moderation are very normal when delivering CBT. Treatment will consist of weekly therapy appointments lasting 60 minutes. Participants will also be encouraged to complete between-session practice tasks, which are a routine part of CBT. The average treatment duration is seven sessions over a period of two months, although patients who require more sessions in order to gain benefit will be provided with them.
The therapists’ ability to deliver the interventions with fidelity to the CBT model and adherence to an evidence-based protocol will be assessed by a team of clinical supervisors at Ieso Digital Health. This assessment is possible because the intervention is delivered online using synchronous written communication, so each therapy appointment is recorded as a transcript. It is therefore possible to quality-check the intervention provided to every participant. Fidelity to the treatment model will be assessed using the standardised and validated ‘Cognitive Therapy Scale-Revised’ (CTS-R). The CTS-R is routinely used in research settings to ensure that the intervention provided is CBT and not another type of therapy, and in higher education settings as a formative and summative assessment of trainees’ ability to deliver CBT. The CTS-R is a 12-item scale where each item is scored on a 0-6 Likert scale, where 0 is incompetent and 6 equates to expert skill. The final score is reported as a total percentage, and a score of 40% is considered competent.
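As a worked example of that scoring scheme, the short sketch below converts twelve item ratings into the final percentage, assuming the percentage is simply the item total divided by the maximum possible score of 72 – an assumption on our part, as the conversion is not spelled out above.

```python
def cts_r_percentage(item_scores):
    """Convert twelve CTS-R item ratings (each 0-6) into the reported percentage.

    Assumes the percentage is the raw total divided by the maximum possible
    score of 12 * 6 = 72.
    """
    if len(item_scores) != 12 or not all(0 <= s <= 6 for s in item_scores):
        raise ValueError("CTS-R needs twelve item scores, each between 0 and 6")
    return 100 * sum(item_scores) / 72

# Example: these ratings total 30/72, i.e. 41.7% - just above the 40%
# competence threshold mentioned above.
scores = [3, 3, 3, 3, 2, 3, 2, 3, 3, 2, 2, 1]
print(f"CTS-R score: {cts_r_percentage(scores):.1f}%")
```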
The primary outcome measures, used at every session, are the Patient Health Questionnaire (PHQ-9), a nine-item self-administered questionnaire that measures the severity of depression, and the Generalised Anxiety Disorder Questionnaire (GAD-7), a seven-item self-administered questionnaire that measures the severity of generalised anxiety disorder.
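For readers unfamiliar with these questionnaires, the sketch below shows how their totals are commonly banded into severity categories; each item is scored 0-3, and the cut-offs shown are the widely published ones rather than figures taken from this trial.

```python
def phq9_severity(item_scores):
    """PHQ-9: nine items, each scored 0-3, giving a total of 0-27."""
    total = sum(item_scores)
    bands = [(20, "severe"), (15, "moderately severe"), (10, "moderate"), (5, "mild")]
    label = next((name for cutoff, name in bands if total >= cutoff), "minimal")
    return total, label

def gad7_severity(item_scores):
    """GAD-7: seven items, each scored 0-3, giving a total of 0-21."""
    total = sum(item_scores)
    bands = [(15, "severe"), (10, "moderate"), (5, "mild")]
    label = next((name for cutoff, name in bands if total >= cutoff), "minimal")
    return total, label

print(phq9_severity([2, 1, 2, 1, 1, 2, 1, 0, 1]))  # (11, 'moderate')
print(gad7_severity([1, 1, 0, 1, 1, 0, 1]))        # (5, 'mild')
```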
The secondary outcome measures are the Patient Activation Measure (PAM) and the Diabetes Distress Scale (DDS).
The Diabetes Distress Scale (DDS) is a 17-item scale that captures four critical dimensions of distress: emotional burden, regimen distress, interpersonal distress and physician distress. First published in 2005, it has been used widely around the world as a clinical instrument for opening a conversation with one’s patients, as well as a critical outcome measure in numerous studies. (See www.diabetesdistress.org).
The PAM is a tool that helps health care professionals assess a patient’s activation level and their level of knowledge, skill and confidence in managing their long-term health condition. Evidence has demonstrated that when patients are supported to become more activated they are better able to manage their long-term condition and they have better physical health.
This is the first study to investigate the efficacy of online, therapist-delivered CBT specifically for patients with diabetes and a co-morbid mental health condition. The research is timely and necessary, particularly if we are to begin to tackle the dual problem of the limited availability of CBT and the increasing prevalence of diabetes and mental health disorders.
If you would like more information about this study, or you know someone who would like to take part please contact Sarah Bateup, Chief Clinical Officer at Ieso Digital Health at [email protected].
This piece by Sarah Bateup was first seen on ‘Open Access Government‘, 18 November 2018.
People with mental health conditions are much more likely to smoke than the general population. We don’t have clear evidence for why this is the case, but there are several theories. These include that people with mental health conditions may smoke to self-medicate or to cope with social exclusion. People with mental health conditions are also more likely to have lower levels of education and higher levels of unemployment, which are accepted risk factors for smoking.
Despite huge gains in getting Australians to quit since the turn of the century (22% of Australians smoked in 2001), people with a mental illness appear to have been left behind. They are a big group to overlook. More than 4 million Australians are living with mental health conditions, including anxiety, depression and psychosis.
People with mental health conditions are at a much higher risk of chronic physical conditions, and more likely to die prematurely as a result. People with severe mental health conditions are at risk of dying from heart conditions and cancer 10 to 15 years earlier than the rest of the population. Smoking is undoubtedly contributing, being a key risk factor for heart disease, stroke, cancer and a range of other conditions.
Some health professionals may have put smoking in the “too hard basket” for certain patients with complex or urgent health needs. But there is good evidence people with mental illnesses want to quit smoking, that they are capable of quitting, and that smoking causes stress rather than relieving it. Research also shows quitting does not exacerbate poor mental health, but rather improves it.
Imagine a health professional telling you to keep smoking to help manage your disease. Until as recently as late last century, this was the case for some people being cared for in psychiatric facilities. There are even reports of patients entering hospital care as non-smokers only to be discharged later with the potentially lethal habit. While a culture of smoking is no longer encouraged in mental health facilities and hospitals, in some it is not actively discouraged.
People with severe mental health problems may spend time in an Australian hospital which is meant to be smoke-free. But are they really free of tobacco smoke? Despite nearly every Australian hospital having a smoke-free policy, we know implementation and enforcement have been patchy.
We need to do more to enforce smoke-free policies and use these as an opportunity to help patients quit. Research shows total smoking bans in mental health facilities help people quit when they are supported with appropriate nicotine-dependence treatment.
While a smoke-free environment will hopefully remove temptation, hospital stays also give health professionals an opportunity to discuss smoking with patients. This can include whether the patient wants to quit, how they want to do it, and what sort of therapies are available to help them with the process.
Post-discharge support is also crucial. All patients should be referred to services such as Quitline when they leave hospital.
We know 4 million people with mental health conditions are much more likely to smoke, but only a small proportion of them need psychiatric hospital care. Other mental health services, including general practice clinics, should promote the benefits of quitting if they are not doing so already.
People with mental health conditions want to improve their physical health and address risk factors causing ill health. However, mental health providers often don’t see this as their job while they concentrate on improving a patient’s mental health.
Given we know quitting smoking will improve patients’ mental health, it’s important all services embed brief models of preventive care into their standard practice. Proven, effective strategies assess the patient’s nicotine dependence, offer personalised advice and assistance, and provide referral to behaviour change supports. These strategies are simple and don’t take much time.
A trial at a mental health service in New South Wales has offered mental health clients the opportunity to discuss their lifestyle with a nurse “coach” who gave advice and support for issues such as smoking, diet and physical activity. Evaluations of this model undertaken so far demonstrate it is popular with patients, inexpensive and can be effective.
People living with mental health issues are interested in improving their own physical health, but quitting smoking isn’t easy. Just like everyone else, people with mental health conditions need help and support.
Despite this need, there is a reported unwillingness or ambivalence among some mental health practitioners to address risk factors such as smoking among their patients, and there is no systematic approach in services to provide support to quit smoking.
Health practitioners and services have a critical role to play in improving health overall. Helping people to stop smoking is still the best thing we can do to support a longer, healthier life.
The fifth National Mental Health Plan, as well as state and territory plans, call for more action to prevent early death and chronic disease among people with mental health conditions. Combating smoking is essential to achieving this goal, and should be incorporated into the provision of care for this population at all levels.
This piece was originally seen on ‘The Conversation‘, 7 November 2018.
Apple and Google have both announced new tools to help people limit the time they spend on their smartphones. As a psychological scientist who has been studying the effects of mobile technology on well-being for the past five years, I can only welcome these new tools. Indeed, a great deal of research has documented how smartphones might be harming people’s sleep quality or distracting them from nondigital activities. In my own experimental research, my collaborators and I have found consistent evidence that smartphones can also distract users from the family and friends right in front of them, such as when sharing a meal or spending time with their children.
In situations that clearly call for limiting digital distraction – like playtime with kids – Apple’s and Google’s new tools will offer a convenient solution. Yet, my research suggests that smartphones may be making us less happy in a much wider range of social situations than we might expect.
The crux of the matter is that people, as it turns out, fail to judge what economists call “opportunity costs” – the value of what someone gives up when they make a choice to do one thing and not another.
For example, in a series of studies I conducted with Jason Proulx and Elizabeth Dunn at the University of British Columbia, we found that people neglect a key side effect of relying on their phones for information: They miss out on chances to boost their sense of social connectedness. Using a mobile map app, for example, obviates the need to rely on other people, removing the opportunity to experience the kindness of a stranger who helpfully provides directions to a store or movie theater.
It is easy to see how completely forgoing social interaction for technological convenience can hurt someone’s social well-being. But most people use their phones precisely to socialize – often while simultaneously socializing with others in person. Perhaps it’s having a drink with a co-worker while also Snapchatting with a friend, texting with a partner, or even setting up a new date through Tinder or Grindr. One may think that socializing with more people simultaneously is better.
But my collaborator, Samantha Heintzelman, and I recently found that combining digital and face-to-face socializing is not as enjoyable as putting down the phone and just spending time together.
In a study at the University of Virginia, we tracked the social behavior and well-being of 174 millennials over the course of a week. At five random times each day, we sent each person a one-minute survey to complete on their mobile phone. We asked what they had been doing in the previous 15 minutes, including whether they were socializing in person or digitally (such as by texting or using social media). We also asked how close or distant they were feeling to other people, and how good or bad they were feeling overall.
We weren’t particularly surprised to find that people felt better and more connected during times when they only socialized face-to-face, as compared with when they weren’t socializing at all. This fit with decades of existing research. We didn’t find any benefits of digital socializing over not socializing at all, though our study wasn’t designed to explore that distinction.
We did find, however, that when socializing face-to-face only, people felt happier and more connected to others than when they were socializing only through their phones. This is notable because the people in our study were the generation of so-called “digital natives,” who had been using smartphones, tablets and computers to interact since very young ages. Even for them, the benefits gleaned from good old face-to-face talking exceeded the well-being of digitally mediated communication.
Most critically, people felt worse and less connected when they mixed face-to-face with digital socializing, compared to when they solely socialized in person. Our results suggest that digital socializing doesn’t add to, but in fact subtracts from, the psychological benefits of nondigital socializing.
As people’s useful digital devices start to provide more and better options for limiting screen time and stemming the flow of digital interruptions, deciding when to use those powers is neither obvious nor intuitive. Behavioral science provides some promising solutions to this predicament.
Rather than having to decide activity by activity when not to be interrupted, people could make Do Not Disturb the default, only seeing notifications when they want to. My recent research – with Nicholas Fitz and Dan Ariely at Duke University’s Center for Advanced Hindsight – suggests, however, that never receiving notifications hurts well-being by increasing fear of missing out. The best way is the middle way: We found that setting the phone to deliver batches of notifications three times a day optimized well-being. To set their users up for optimal psychological benefits from both their digital and nondigital activities, Google and Apple could make batching notifications easier.
Google and Apple should also expand their proactive recommendations for managing interruptions. The iPhone, for example, already offers the option to automatically turn on Do Not Disturb while driving, and in the forthcoming features, while sleeping. The growing evidence on how smartphones are compromising well-being during social interactions suggests that social and family time also warrants protection from digital disturbance.
People spend more time in the company of their digital gadgets than with friends and even romantic partners. It is only fair that these devices should learn more about what makes people happy, and provide a chance to reclaim the happiness lost to digital activity – and from the companies that need people’s attention to thrive.
This piece by Kostadin Kushlev was originally seen on ‘The Conversation‘, July 10, 2018.
A welcome conversation surrounding mental health has arisen but as more people make the decision to reach out, too few find a supportive hand.
Not a week passes without a report on Ireland’s mental health system, where lengthy waiting lists, staff shortages and inadequate facilities are the rule rather than the exception. Minister of State with special responsibility for mental health Jim Daly recently announced plans to pilot mental health “web therapy”, signalling a growing recognition of the need for novel approaches.
The capabilities of technology in the mental health sphere continue to flourish and developing therapeutic applications based upon systems driven by artificial intelligence (AI), particularly chatbots, is one arena that’s rapidly expanding. Yet, if you needed to open up, would you reach out to a robot?
While not specifically focused on AI, a study from the Applied Research for Connected Health (Arch) centre at UCD shows 94 per cent of Irish adults surveyed would be willing to engage with connected mental health technology.
Study co-author Dr Louise Rooney, a postdoctoral research fellow at Arch, says AI-based systems with a research and a patient-centred focus could be beneficial.
“I don’t think AI is the answer to everything or that it could fully replace therapy intervention but I think there’s a place for it at every juncture through the process of treatment; from checking in, to remote monitoring, even down to people getting information,” she says.
The latest Mental Health Commission report shows waiting times for child and adolescent mental health services can reach 15 months. Rooney believes AI-based therapy could be particularly useful for young people who “respond very well to connected mental health technology”. The anonymity of such platforms could also break down barriers for men, who are less likely to seek help than women.
Prof Paul Walsh from Cork Institute of Technology’s department of computer science feels that AI-driven tools can “improve the accessibility to mental health services” but won’t fully replace human therapy.
“For those who are vulnerable and need help late at night, there’s evidence to show [therapy chatbots using AI and NLP] can be an effective way of calming people,” says Walsh, who is currently researching how to build software and machine learning systems for people with cognitive disorders. “If someone’s worried or stressed and needs immediate feedback, it’s possible to give real-time response and support without visiting a therapist.”
Professor of psychiatry at Trinity College Dr Brendan Kelly says AI-based platforms such as chatbots can help people to take control of their wellbeing in a positive manner.
“They can help people to take the first step into an arena that may be scary for them but I feel there will come a point that this is combined with, or replaced by, a real therapist,” adds the consultant psychiatrist based at Tallaght Hospital.
Using AI-driven mental health therapy doesn’t come without concerns, one being privacy.
“Clearly it’s a very important issue and people shouldn’t use something that compromises their privacy but it’s not a deal-breaker,” says Kelly. “There are ways to ensure privacy which must be done but [fears and challenges] shouldn’t sink the boat.”
Being completely transparent with users about data collection and storage is key, Rooney adds.
Whether AI can determine someone’s ability to consent to therapy is another potential caveat raised by Rooney. However, she feels that forming “watertight legislation” for this technology and ensuring it’s backed by research can help to overcome this and other potential pitfalls.
While most current tools in this field focus on mental wellbeing and not severe problems, Walsh raises the potential of false negatives should AI decide somebody has a chronic illness. To avoid this, it’s important to keep a human in the loop.
“Many machine-learning systems are really hard to analyse to see how they make these judgements,” he adds. “We’re working on ways to try to make it more amenable to inspection.”
As potentially anybody can engineer a system, Walsh recommends avoiding anything without a “vast paper trail” of evidence.
“These will have to go through rigorous clinical trials,” he says. “We need policing and enforcement for anything making medical claims.”
Humans could become attached to a therapy chatbot, as was the case with Eliza, a chatbot developed at Massachusetts Institute of Technology in the 1960s. However, Walsh doubts they will ever be as addictive or as great a threat as things like online gambling.
While the sentiment that AI-based therapy will assist rather than replace human therapy is quite universal, so is the view it can have a great impact.
“Achieving optimum mental health involves being open to all different ingredients, mixing it up and making a cake. AI can be part of that,” says Rooney.
If well regulated, Walsh says AI can augment humans in terms of treating people.
“I’m hopeful that benefits would be accentuated and the negatives or risks could be managed,” says Kelly. “The fact that it’s difficult and complex doesn’t mean we should shy away, just that we must think how best to capture the benefits of this technology.”
Stanford psychologist and UCD PhD graduate Dr Alison Darcy is the brains behind Woebot: a chatbot combining artificial intelligence and cognitive behavioural therapy for mental health management.
“The goal is to make mental health radically accessible. Accessibility goes beyond the regular logistical things like trying to get an appointment,” explains the Dublin native, who conducted a randomised control trial of Woebot before launching. “It also includes things like whether it can be meaningfully integrated into everyday life.”
Darcy is clear that Woebot isn’t a replacement for human therapy, nor will he attempt to diagnose. In the interest of privacy, all data collected is treated as if users are in a clinical study.
Not intended for severe mental illness, Woebot is clear about what he can do. If he detects someone in crisis, Woebot declares the situation is beyond his reach and provides helplines and a link to a clinically-proven suicide-prevention app.
Originally from Wexford, Máirín Reid has also harnessed the capabilities of AI in the mental health sphere through Cogniant. Founded in Singapore with business partner Neeraj Kothari, it links existing clinicians and patients to allow for non-intrusive patient monitoring between sessions.
It’s currently being utilised by public health providers in Singapore with the aim of preventing relapses and aiding efficiency for human therapists. As Cogniant is recommended to users by human therapists, decisions on consent capabilities are formed by humans.
“Our on-boarding process is very clinically-driven,” says Reid. “We’re not there to replace, but to complement.”
While not intended for high-risk patients, Cogniant has an escalation process that connects any highly-distressed users to their therapist and provides supports. There’s also a great emphasis on privacy and being transparent from the outset.
“Clinicians are saying it drives efficiency and they can treat patients more effectively. Patients find it’s non-intrusive and not judgmental in any form.”
This piece by Amy Lewis was first seen on ‘The Irish Times‘, 23 August 2018.
Google today is expanding YouTube’s set of “digital wellbeing” tools, with an added feature that will calculate how much time you spend watching videos. The idea is that this will allow users to take better control over their viewing behavior and place limits on their time spent on YouTube by way of other app features that remind you to take a break. The “Time Watched” feature, rolling out today, will inform YouTube users how much they’ve watched today, yesterday, and over the past 7 days, says YouTube.
The company, along with Apple and Facebook, has recently begun to take responsibility for the addictive nature of devices and services that were designed to exploit vulnerabilities in human psychology, and these companies are now facing the unintended consequences of those decisions.
At Google, the company is now addressing digital wellbeing across its products, including Android, Gmail, Google Photos, YouTube and elsewhere.
At its Google I/O developer conference earlier this year, it introduced a series of controls for YouTube viewers, including reminders to pause your viewing (“Take a Break”) and those that would disable notification sounds for periods of time, and allow you to receive notifications as a digest.
At the time, Google said it was “soon” preparing to roll out a “Time Watched” profile that will appear in the Account menu – that’s what’s new as of today.
When the feature arrives, you can visit your profile in the account menu to see your stats, including time watched over various time frames, as well as your Daily Average. This information is calculated based on your YouTube watch history, the company says.
That means if you have deleted videos from your history or watched in Incognito Mode, that viewing won’t be counted. Additionally, if you pause your history, you’ll also be unable to track your stats.
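Conceptually, the calculation is straightforward: sum watch-session durations by day, then report today, yesterday, the last seven days and a daily average. The sketch below illustrates that aggregation; the data format and values are invented for the example and have nothing to do with YouTube’s actual implementation.

```python
from collections import defaultdict
from datetime import date, timedelta

# Invented watch-history records: (date of viewing session, minutes watched).
history = [
    (date(2018, 8, 28), 42), (date(2018, 8, 28), 13),
    (date(2018, 8, 27), 55), (date(2018, 8, 24), 20),
]

def time_watched_stats(history, today):
    """Aggregate per-session durations into the stats the feature reports."""
    per_day = defaultdict(int)
    for day, minutes in history:
        per_day[day] += minutes
    last_7 = [per_day[today - timedelta(days=i)] for i in range(7)]
    return {
        "today": per_day[today],
        "yesterday": per_day[today - timedelta(days=1)],
        "last_7_days": sum(last_7),
        "daily_average": round(sum(last_7) / 7, 1),
    }

print(time_watched_stats(history, today=date(2018, 8, 28)))
# {'today': 55, 'yesterday': 55, 'last_7_days': 130, 'daily_average': 18.6}
```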
It was initially unclear if YouTube TV watch history is being counted. A screenshot shared by Google today says it’s not, nor is YouTube Music. However, the answer on the YouTube Help support site Google linked to in a blog post says it is.
Google first told us that YouTube TV was not counted, then later corrected itself to say that was a mistake after we published. YouTube TV viewing is, in fact, counted, a spokesperson confirms. The company is now making the necessary corrections as a result of our inquiry.
It’s unfortunate that YouTube TV is counted, as that’s essentially watching TV – something that’s associated with longer programming blocks, and more time spent viewing. At the very least, this should be broken out separately from YouTube itself.
YouTube responded that TV time is currently being included because, by design, its usage was included in the main app’s History section. In time, however, the company aims to make changes so it’s no longer counted. But it doesn’t have a time frame yet for its removal.
The new feature is rolling out starting today, YouTube says.
This piece by Sarah Perez was first seen on ‘TechCrunch‘, August 28, 2018.