In my last post I discussed a partnership between Google and The Samaritans which aims to provide clear signposts to the Samaritans’ helpline and website at the top of suicide-related Google Search results. This partnership shows Google taking social responsibility for the power they have in the search engine market. Following on from this, I have been exploring Google’s social and corporate responsibilities in relation to mental health and technology.
Google was in the news earlier this year after an investigation by The Times found that brands were unwittingly advertising on sites by Islamic extremists, pornographers and white supremacists through Google’s programmatic advertising (Mostrous, 2017). This raised the issue of Google’s Corporate Social Responsibility in relation to advertising within the media.
Corporate Social Responsibility (CSR) arises from two distinct assumptions about how the common good can be achieved within a capitalist society. One model of CSR is based on the principle of a self-regulating market, whilst the other stresses the need for government regulation. Notably, unlike the majority of companies, Google does not publish a comprehensive annual CSR report (Sandoval, 2014). This makes it difficult to analyse the company's attitude to its social responsibility.
That said, Google does provide information about its social initiatives via Google.org and its environmental initiatives via Google Green. In addition, Google's Code of Conduct is a policy that dictates how the corporation and its employees should behave towards customers and colleagues, and how to act responsibly in general.
But what about specifically in the mental health sector?
In 2015, Thomas Insel left his post as Director of the National Institute of Mental Health to join Google Life Sciences, in order to investigate how technology can help diagnose and treat mental health conditions. This is another example of Google self-regulating its responsibility to the mental health sector, much like its partnership with, and free advertising for, The Samaritans.
Whilst Google's responsibility in the field of mental health technology is self-regulating, Arendt & Scherr (2016) criticise the suicide-prevention feature Google created through its agreement with The Samaritans, because it is displayed to only a limited number of visitors. They present a new algorithm that increases the frequency with which suicide-prevention results are shown, optimised to reach vulnerable individuals.
Given Google's dominance of the search engine market, and their consequent control over information seeking and retrieval, they have become powerful gatekeepers (Shoemaker & Vos, 2009). As such, we might argue that they have a responsibility to do more. Whilst their algorithm and partnership with The Samaritans are admirable, there is much room for improvement, such as providing support for other mental health disorders, which I will discuss in my next post.
Arendt, F. & Scherr, S. (2016). Optimizing Online Suicide Prevention: A Search Engine-Based Tailored Approach. Health Communication. DOI: 10.1080/10410236.2016.1224451
Mostrous, A. (2017). Big brands fund terror through online adverts. The Times. [online] Available at: https://www.thetimes.co.uk/article/big-brands-fund-terror-knnxfgb98 [Accessed 23 Apr. 2017].
Sandoval, M. (2014) From Corporate to Social Media: Critical Perspectives on Corporate Social Responsibility in Media and Communication Industries, Oxon: Routledge.
Shoemaker, P. & Vos, T. (2009). Gatekeeping Theory. New York, NY: Routledge.