The AI train and ethical tracks
Wherever you turn – at home, at work, in a public space, in the GP’s surgery, or even in surgery itself – AI is, in some way, shape or form, playing a part. In the legal world there has been much talk about the potential of AI to revolutionise legal services. In both B2B and B2C there is much hope that technology can drive not just efficiencies, but wholesale rethinks of how to better deliver justice and a dependable, trustworthy legal system. A challenge facing many jurisdictions, including those across the UK, is how to enable access to legal services for those who cannot currently afford them. These people – individuals, small businesses, families – are locked out of one of the core pillars of a functioning society, unable to give effect to their rights or manage their affairs effectively. Termed ‘unmet legal need’, this is a growing problem, and there is hope that AI-powered legal insights can help bring access to law to these excluded communities. But how do we do so in a way that protects rights? That enhances the rule of law rather than watering it down? That encourages investment in solving society’s problems for the long term? Can AI’s deployment be guided?
AI is big money: thousands of start-ups and big tech companies are investing huge sums, and AI is sewn into the fabric of our everyday lives without it being obvious. ChatGPT, Bing, Bard… the list will grow, as will the excitement, and for some the concern.
Some characterise the growth of AI applications as an unstoppable force, having greater and greater impact on our everyday lives, like a runaway train with no one at the controls. This proposition extends to argue that this momentum will bring benefits to many, such as better diagnostics or access to legal advice, and that the ‘downsides’ are a cost of such progress and in any event can’t be controlled. I call this position ‘hopeful fatalism’: ‘I can’t do anything about it, so I’ll just hope for the best – things normally even out.’
On the other hand we have what I call ‘responsible optimism’ – the group of people who believe that AI’s deployment has great potential to help solve many of the genuine problems facing people and planet, but that to maximise the benefits and manage the potential harms we need to take deep care with our development of AI: we ought to take responsibility for the choices that are made, and we ought to try to control the speed of the train.
There is of course a third camp in this debate, what I refer to as ‘pessimistic fatalism’: ‘it’s going to end very badly for many, and there is nothing we can do about it – the train is unstoppable’. The arguments within this camp centre on the asymmetry of power: AI’s deployment will exacerbate the current asymmetries, concentrating power and benefit even further in the hands of a very few, and will be so overwhelming that the very fabric of our societies will be tested and will, in some cases, fail. The results: fundamental challenges to democracy, to truth, to equity and to freedom.
Ethics 101
At the heart of this debate are ethical questions that are far from black and white.
Take, for example, the question of the future of work. AI has the potential, in time, to replace the work of some people in some professions. This might be seen as a good thing if the reason for ‘replacement’ is accuracy, speed or cost. In law we are constantly reminded of how many people have no access to legal help. What if those legal problems could be avoided entirely? What if the problems, when they arose, could be speedily and cheaply resolved? Surely that is a good thing. But what of those who used to do those very same jobs – the author, the songwriter, the actor, the lawyer, the call centre handler, the air traffic controller? What of a society where swathes of communities are left with no income? Whose interests should guide the decisions taken? If we choose one constituency’s interests over another’s, how do we as a society compensate or support those who lose out? How do we take decisions that consider the longer term but are accepted in the short term? An ethical approach involves surfacing these dilemmas, exploring them transparently and taking informed decisions based on the best possible insights. Although incredibly hard, it can be done. But by whom? Who is currently making those trade-offs? Who should be making the decisions? Who is held accountable, and by whom, for such choices? How do we design for the right balance of inclusion in these debates?
The ‘hopeful fatalist’ would take the view that someone, somewhere will eventually work out the answer. The ‘pessimistic fatalist’ would take the view that there is a vacuum of meaningful public consideration of the issues, and that only the interests of the most powerful will be served. The ‘responsible optimist’ argues that although there is a vacuum of proper consideration at the moment, this need not persist.
There are conditions that need to be put in place to make this plausible, however. This debate is a blend of technology insights and drivers, ethical and social implications, and considerations of economics and political economy. To deliberate fully, to understand the issues at play in their entirety, requires all of this expertise working collaboratively, in explorations that are open and deliberative in approach. In the same way that the development of AI itself has reimagined the world around us, we need to reimagine how we resolve such policy questions.
It won’t be easy; indeed, this is likely to be one of the most challenging issues to face societies right across the world. That is why the second condition of success is the ability and resolve to take the long view, and not be tempted by the jam available today. Easier said than done, but it is an essential requirement of being able to genuinely shape and guide the use of AI in service of creating the world in which we want to live.
In service or out of control?
One possible framing for many of the questions raised when concerns about AI surface is to consider whether the technology is designed and delivered in such a way that it will genuinely help to solve a real problem, or whether it pushes, or even creates, new problems further upstream or downstream. Just because we can doesn’t mean we should, especially when the displaced impacts are significant. Take, for example, the much-reported concerns about the use of AI-based tools to judge recidivism in criminal justice cases in the USA. Poor data in, poor data out: in the real world, poorly designed versions of these tools affected people’s liberty and the level of faith communities have in their justice systems. Are these tools truly serving justice?
Returning to the unmet legal need challenge: imagine a world where disputes could be resolved without having to go all the way to court, or where, even better, disputes are avoided altogether. Imagine a world where finding out your rights as an individual, and getting expert advice on important life events, was quick, easy and accurate.
These, most people would agree, are real upsides – so long as there are guardrails. Designing such systems with a full understanding and appreciation of their ethical and societal impacts is vital. A chatbot might sound all well and good, and digital-only justice might sound super-efficient, but we risk losing much of what builds trust and equity in the justice system if applications do not take into account issues such as an understanding of rights, the implications for the rule of law, the importance of transparency, equality of arms, and the impact on a system built on precedent (maybe we will need a rethink in time?). The responsible optimist would argue that this is possible, and that we can use the power of AI to begin to plug some of the ‘unmet legal need’ gap.
The call to action
So what now, you might ask? This all seems rather heavy, but this issue is entirely man-made, and therefore we can, if we choose, take control of the train: guide it along ethical tracks to the destination we want, at the pace we need, in the manner most suited to arriving in a place that feels better than the place we left.
So here is an easy checklist, based on four pen portraits, of a few simple steps you can take:
Developer in a start-up: Run a brown bag session with the rest of your start-up (all of them, not just the developers) and have a conversation about what you are building, why, and what problem you are seeking to solve. What are the possible implications you had not thought about? Better still, invite someone external from a completely different background to join the discussion.
Developer / researcher in a large tech company: Go and find out who in your company leads on ethical assessments. Invite them to a team session to explore and better understand the ethics framework, and how to enhance it. Invite representatives of at least two other teams, from different disciplines, to join you.
Policy expert / researcher: If you don’t have people with tech and industry experience in your team, go and forge a partnership. You don’t need to hire a team of data scientists and researchers, but you do need them as allies and sources of insight.
Joe Bloggs: You’re not in a tech role, nor a policy role, but that doesn’t mean you can rest on your laurels. Your task is the most important. In a democracy the voice of society matters, and you need to raise yours. Be curious: at home or at work, ask how and where AI is deployed, and what you think the upsides and downsides are. There is no need to bury your head in the sand or feel helpless. Learning about the implications is something we all need to do, so that when the time comes to give governments and businesses permission to take longer-term positions, you can do so from a position of knowledge, and of power.
Good luck, and I hope this short piece inspires you. The potential gain from AI for society as a whole is immense if we take the reins, guide it along ethical pathways, and make conscious choices about the things we will and won’t do.