
Ethics in AI

AI is a powerful and disruptive technology altering the landscape of application development and the wider world as we know it. The adoption of AI is increasing at a fast pace. While AI helps developers in every area of society to create solutions, implement change, and drive progress, it also forces us to think more deeply about our relationship with technology and the ethics of AI.

Indeed, the adoption and availability of tools to build AI have caught up with the promises of the field, and what once seemed unachievable is now within reach. As a result, many people are concerned about and actively discussing the implications of AI, and to what standard we must hold ourselves in order to ensure that AI is aligned with our widely shared human values.

WHERE DO DEVELOPERS STAND ON ETHICS IN AI?

Developers' views are of the utmost importance: they are, after all, on the front line of building and implementing the algorithms that underlie AI products. In the 16th edition of our Developer Economics survey, we asked developers to what degree they agree or disagree with statements on issues such as AI's unintended consequences, algorithm bias, and job replacement, as well as their views on data collection and protection.

[Figure: Ethics in AI graph]

WE GOT THE BASICS RIGHT

It should give us peace of mind to know that the vast majority of developers take user rights very seriously. Developers agree that they should not only ask for user consent to collect data and follow security and data protection laws, but that they should also go above and beyond legal requirements – 72% of developers told us so. Scandals such as Facebook/Cambridge Analytica have shown that regulation is lagging behind, and it is very encouraging that developers are aware of their ethical responsibility while regulators are still trying to catch up.

When it comes to AI specifically, however, developers have diverging opinions on a range of topics.

CAN AI BE TAUGHT TO BEHAVE AS THOUGH MORAL & HUMAN-FRIENDLY?

No topic divides developers more than the unintended consequences of AI. When asked whether AI can be taught to behave in a moral and human-friendly way, developers' responses split almost equally among those who agree (33%), those who neither agree nor disagree (40%), and those who disagree (27%). While such a distribution of opinion could be expected from the general population, one might expect developers to have a more unified view, as they possess a better technical understanding of what ML/AI can and cannot achieve.

Looking at the breakdown of developers' opinions by age group, we find that individuals under 25 years old have a much more positive outlook (45% agree) than those over 35 years old (28%). Where developers live is another differentiator: Europeans are more neutral (42% neither agree nor disagree), whereas South Asia has the highest percentage of developers who agree that AI can be taught to behave in a moral and human-friendly way (49%). These differences may reflect the type of involvement in ML/AI, as developers in South Asia are more likely to be using ML for medical diagnosis and prognosis, object recognition/image classification, and NLP (Natural Language Processing), whereas Europeans are more likely to be working in more ‘traditional’ ML fields such as fraud detection.

Responses of ML/AI developers and data scientists also differ according to their type of involvement (as professionals, hobbyists, or students) and their use cases. Half of the developers who teach AI, ML, or data science have favourable views of AI's ability to behave in a moral and human-friendly way – in fact, teachers are twice as likely to strongly agree compared to all developers involved in ML/AI. On the other hand, developers who build machine learning frameworks are more likely to strongly disagree (12% vs. 8% for all developers).

Another very interesting insight is that more than half (56%) of ML developers who work in bioengineering and/or bioinformatics agree that AI can be taught to behave morally and be human-friendly. This is worth noting, as these developers build ML/AI that applies engineering principles of design and analysis to biological systems, and they are therefore likely to have a deeper understanding of whether such a lofty goal is feasible.

A burning question is “Will AI steal your job?”

Discover the answer and more details on Ethics in AI in our State of the Developer Nation 16th Edition report.

It’s free and full of insights.
