Companies should focus on AI ethics even if it hits profits, says Microsoft UK director

Companies must refuse to create artificial intelligence that is unethical and could harm humanity, even if it affects their profits, a senior director at Microsoft UK has said.

Hugh Milward, Senior Director of Corporate, External and Legal Affairs, said businesses need to “draw a line” on what is acceptable when developing cutting-edge technology and understand their responsibilities.

“Just because something can be done, doesn’t mean it should be done,” he told the Tech UK Digital Ethics Summit in London.

The event heard from leading figures in business and government on how the UK can remain a global leader in building digital services and technology, and how those technologies should be used for the benefit of everyone.

Hugh Milward speaks at the Tech UK Digital Ethics Summit

“It is essential to build trust in technology such as AI,” Milward said. “People can see the benefits of it in sectors such as healthcare but they also have concerns around invasion of privacy and job losses. They can see change is coming and they want governments and society to be involved in that.

“There are three key aspects of AI development we need to look at: building ethical principles, regulation of facial recognition, and helping people develop the digital skills they will need to thrive in the workplace, as well as the human skills that make us who we are, such as empathy and critical thinking.”

AI is already being used by the NHS and major companies such as Marks & Spencer to improve how they work. Milward predicted that the demand for the technology would grow, pointing to research released by Microsoft showing that nearly half of bosses believe their business model won’t exist by 2023. While 41% of business leaders believe they will have to dramatically change the way they work within the next five years, more than half (51%) do not have an AI strategy in place to address those challenges, he said.

However, the world is currently in a unique position in that society is trying to address a technological leap forward while it is still happening, Milward added.

“Governments, companies and organisations are trying to intervene and regulate AI as this technology is developing, and I don’t think we’ve ever seen that before,” he said. “Historically, the gap between the adoption of technology and regulation has been quite wide, but now it’s very narrow. That’s a good thing. Ideally, we need to make it as narrow as possible.”

Milward was speaking after Information Commissioner Elizabeth Denham and Margot James MP, Minister of State for the Department for Digital, Culture, Media and Sport. James used her keynote speech at the Tech UK event to highlight the “amazing opportunities, prosperity and growth” that technology offers, “but only if they are developed ethically and responsibly and retain the trust of society”.

The Government set up the Centre for Data Ethics and Innovation earlier this year to help the UK unlock the benefits of AI while advising ministers on how to develop it safely. “The countries leading the AI debate will remain competitive in a very competitive world,” she said.

Kate Rosenshine, Cloud Solution Architect Manager at Microsoft, said the UK can gain an advantage by learning from how biotechnology has evolved over the past three decades.

Kate Rosenshine speaks at the Tech UK Digital Ethics Summit

“Thirty years ago, biotechnology was at a similar stage to AI today – it was showing great promise in its ability to alter the course of human development,” she said in her Tech UK keynote. “Scientists realised the importance of talking to the public about the technology, to give them undistorted information. This, in turn, helped policy makers make informed decisions regarding the ethics and regulations in the biotech field.

“The industry came together, with multiple stakeholders, and the effects of the resulting guidelines are still being felt today, as the general public participates in scientific discourse. This allowed for progress in the field with more public support, and has brought many benefits to the public.”

When it comes to AI, part of the discussion will focus on the collection and use of data, which is critical to the technology’s development and effectiveness. One of the key considerations for companies is how to reduce bias in algorithms so that AI benefits everyone. Rosenshine believes the answer to that issue lies with humans.

“When we design AI systems, we need to avoid bias as much as possible,” she said. “We need to think about where data comes from, and how and for what purpose it was collected; so we need humans to physically review that process. Those systems also need to represent everyone, but they can’t if the teams creating them are not diverse.

“Machines are good at recognising patterns and scale, but humans are good at reasoning and learning from just a few examples. By working together and allowing AI to reinforce what humans are good at, we get something stronger.”
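Rosenshine did not describe any specific tool, but a minimal sketch of the kind of human-led data review she mentions might look like the following, in plain Python. The records, group labels and outcome values below are entirely invented for illustration; the idea is simply to surface how well each group is represented in a training set, and how outcomes differ between groups, so a person can question where the data came from and whether it is skewed.

```python
from collections import Counter, defaultdict

# Hypothetical training records (invented for illustration): each row notes
# which demographic group the example belongs to and a binary outcome label.
records = [
    {"group": "A", "outcome": 1},
    {"group": "A", "outcome": 0},
    {"group": "B", "outcome": 1},
    {"group": "B", "outcome": 1},
    {"group": "C", "outcome": 0},
]

def audit_representation(rows):
    """Print each group's share of the data and its positive-outcome rate,
    so a human reviewer can spot under-represented groups or skewed labels."""
    counts = Counter(r["group"] for r in rows)
    positives = defaultdict(int)
    for r in rows:
        positives[r["group"]] += r["outcome"]
    total = len(rows)
    for group, n in sorted(counts.items()):
        share = n / total
        pos_rate = positives[group] / n
        print(f"{group}: {share:.0%} of data, positive-outcome rate {pos_rate:.0%}")

audit_representation(records)
```

A report like this does not remove bias by itself; it gives the reviewing humans Rosenshine describes a starting point for asking why the data looks the way it does and whether it should be rebalanced or re-collected.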