My Journey into AI Policy

Thorin Bristow – 17 Aug 2022

(Originally published on LinkedIn.)

The past year has been a whirlwind: a journey characterised by enormous growth and experience, both professionally and academically. I chose to write this post as a way of updating my network – to explain where my mind has been and where it is now. I hope to highlight why I think artificial intelligence (AI) is so important, and why I believe a focus on the ethical development of AI, together with effective governance and regulation, is so necessary going forward.

Why AI Policy?

I consider AI to be the capacity for machines to carry out tasks usually considered to require intelligence or human cognition, providing unprecedented efficiency that has the potential to amplify both the benefits and downsides of digital technology for society.

Advances in AI are powerful enough to benefit everyone. For example, AI has the potential to (i) revolutionise healthcare by bringing modern medicine and diagnostic expertise to under-privileged and impoverished communities, (ii) reform education through personalised approaches and adaptive learning, and (iii) provide solutions to global food needs using precision agriculture. Nevertheless, despite the potential AI presents, I am sceptical of our collective ability to achieve the political consensus required to reap the full, equitably distributed benefits of AI. The fact is, current developments in technology may also worsen social and economic inequalities. Those who stand to benefit most from advances in AI and digital technologies are the wealthy (in accordance with the Matthew effect of accumulated advantage) and those with specialist skills whose careers are not easily automated. These privileged groups represent a minority that could possess a marked advantage over the rest of humanity in the future.

More abstractly, the capacity of AI to understand humans better than we understand ourselves concerns me and, in my view, calls the very idea of individual sovereignty into question. Even simple machine learning algorithms can be employed to precisely target specific groups with personalised advertising and media content, or to spread mis- and disinformation on behalf of political powers, all with the ethically questionable intention of manipulating our behaviour. These technologies have the potential to radically change our world, and the difference between progress and regression can be at once vast and subtle. Advances in AI are moving so rapidly that we risk leaving social institutions, and our democratic processes, behind.

I am concerned that we will outsource our moral responsibilities to sophisticated machines, and that corporate interests will ultimately decide the direction of AI research. Managing oversight and accountability is crucial, as questions regarding legal duty of care become increasingly opaque. Additionally, the lack of cultural and socioeconomic diversity in AI research could encode inherent discrimination and implicit bias into AI models, exacerbating social inequalities. I further anticipate that automation has the potential to be detrimental rather than empowering for society if there is no strategy for replacing employment with something better. Moreover, the development of autonomous weapon systems is deeply concerning, especially considering that their low cost, low risk, high secrecy, and permissive lack of accountability are all characteristics likely to encourage less reserved application of military force. At the risk of sounding polemic, I worry that indecision and inaction on the part of policymakers may leave developments in AI at the whim of market forces, resulting in unrestrained and unregulated disruption to what appears to be an increasingly delicate social order.

I believe that we are at an inflection point, where regulation is needed more than ever if we are to create a future in which AI serves the welfare and collective aims of society. The merger of bio- and information technology is likely to become the defining socio-technological issue of my lifetime. Consequently, I am excited to play a role in defining the rules of an emerging technology with far-reaching sociological implications, and to be involved in seeking resolutions to as-yet-undefined problems.

My Journey in Brief

My current interest in AI grew during the early days of the Covid-19 pandemic (summer of 2020). Yuval Noah Harari's books, particularly 21 Lessons for the 21st Century, turned my attention to the future (and present!) of AI and digital technologies, and their potential impacts on society. I came to understand ‘technological disruption’ as one of the most pressing issues of my generation. Synthetic media is one example that sparked my interest after listening to a podcast interview with Nina Schick. I felt a sense of urgency once I recognised the disruptive potential of this technology for society and politics. This led me to search for programs that would help me segue from physics into AI regulation and the digital technology policy space. Like many of my peers, I want to use my career for good. Responsible innovation and the development of AI for social benefit is becoming an increasingly important area.

My thesis project for my previous master's was in computational condensed matter physics, the products of which we see everywhere in our digital world: mobile phones, LED lights, computer chips, etc. I have now transitioned my academic focus to how this area of science, and digital technologies including AI, impact the lives of real people. This is where careful consideration is required for shaping policy, as well as in the conception and design of these technologies. Advantageously, my scientific background also involved a strong programming element. For example, I have used cellular automata to model traffic flows in theoretical physics, and molecular dynamics simulations to model interfacial water with the National Graphene Institute.

In identifying which problems in the world are most pressing, I consistently return to social issues that impact the everyday lives of real people. AI, and our rapidly changing technological and social ecosystem, is an area that resonates with me, and one in which I can apply my scientific training to help make informed judgements and decisions. Notwithstanding my technical competency, my ultimate aim is to contribute to policy design. The current landscape repeatedly demonstrates the shortage of trained professionals who can address nuanced ethical and policy issues in science and technology.

My interest in privacy, policy, and civic engagement led me to work at an ethics-oriented data privacy company in early 2021 as a copywriter and policy analyst. In this role I analysed existing privacy policies of numerous organisations, including Facebook and Amazon, and conducted research on ethical issues in data and AI that addressed themes of privacy, opacity, responsibility, and bias. Then, to broaden my understanding of public policy and AI regulation, I sourced and completed a 5-week online course titled “Digital Data Privacy: Law and Practice”. This course gave me the opportunity to draw connections between digital technology, law, policy, and regulation in the context of data privacy; investigating themes of surveillance and biotechnology in public life, using COVID-19 interventions as a case study.

Present Work

In 2021, I was awarded the DeepMind Science, Technology, and Society Scholarship – DeepMind’s first scholarship for ethics in AI – to undertake my second master’s degree at the Department of Science and Technology Studies at University College London (UCL). Science and Technology Studies (STS) is a unique field, focussing on the intersection of history, sociology, and philosophy of science and technology, including science policy and communication; and related issues regarding expertise, values, governance, and ethics. This interdisciplinary course has contextualised my physics background in sociological research and public policy, allowing me to focus on the ethical applications of emerging technologies.

I am currently working on a dissertation project under the supervision of Turing Fellow Dr Melanie Smallman at UCL, centred on AI, technology policy, and socioeconomic inequalities. The project involves secondary research: critically analysing and comparing policy documents and ethical frameworks, with a focus on mitigating adverse social outcomes of emerging technologies based on socioeconomic characteristics. Whilst algorithmic bias and discriminatory outcomes of AI models have been well documented in the context of race and gender, comparatively less work has been done in the context of socioeconomic inequality. At present I am examining the differences between UK policy (the National AI Strategy) and EU policy (the AI Act), and considering what these different approaches to governing AI represent in the broader political sphere. I am also investigating the narratives that drive regulation and deregulation, and the regulatory bottleneck wherein an abundance of policy advice coexists with relatively little actual governance.

The Future

Regarding AI, I consider myself neither optimistic nor pessimistic about the future. I think it is important to have a balanced attitude, and for policymakers and technologists alike to hold two seemingly contradictory viewpoints at the same time. On the one hand, the potential benefits of AI in helping to build a more equitable future are enormous; on the other, the downsides and dangers are very real. Only by having a measured view of both the utopian and dystopian versions of the future can we move forward in an intentional way: with a vision of the future to aim at, and a vision of the future to cautiously avoid.

This year has equipped me with valuable insights and a wider professional network. It has been a major step towards finding a role that contributes to the collaborative effort of placing the ‘human’ back into technology and advancing ethical thought within the sector. My long-term goal is to work at the intersection of AI, technology, society, and ethics, to provide specialist advisory services to policymakers and leaders. The master’s at UCL, supported by the DeepMind Scholarship, has helped me develop the knowledge, skillset, and network to move forward in realising this ambition.

In the coming months I will be completing my dissertation, and evaluating which roles are best suited to helping me achieve my goals and realise my commitment to AI governance. I am uncertain where this will lead me, but I am excited to find out as I continue to connect with and be inspired by brilliant individuals who share this vision of a more equitable future.
