
What good is AI for Development Goals?

May 16, 2018

The United Nations may have a reputation for being a talking shop. But when it comes to artificial intelligence and the 2030 Sustainable Development Goals, it's trying to get ahead of the conversation.

https://p.dw.com/p/2xllV
Around a hundred measles deaths feared in Papua, Indonesia. Image: Getty Images/AFP/Y. Muhammad

The United Nations' 2017 report on Progress towards the Sustainable Development Goals makes for sorry reading. Even a random scan of the document can make you wonder how long it will take for developing countries, home to some of the poorest people in the world, to become "developed." And, no, I don't mean that developing countries are dragging their heels — it's more likely that the rest of us are dragging ours.

Check this. On the goal to end hunger and achieve food security, based on statistics from 2016, the UN says about 793 million people globally are undernourished.

An estimated 155 million children under the age of 5 are stunted, which means they are too short for their age, probably as a result of chronic malnutrition.

And roughly 52 million children under the age of 5 suffer from wasting, a condition where kids weigh too little for their height, often because they are not getting enough to eat or because they have suffered some disease.   

Sure, the numbers do go down. But it's a long list of global "challenges" — to use safe speak.

Dengue prevention in Delhi, India: measures like fumigation can help stop the spread of dengue fever, and AI could help predict outbreaks. Image: picture alliance/dpa/Str.

So how long will it take? There's no answer to that. But at the second "AI for Good Global Summit" this week in Geneva, humanitarian experts and specialists in artificial intelligence (AI) are coming together to see whether practical uses of the technology can speed things up.

Confidence in code

Take Kimetrica, a social enterprise with bases in the US, Ethiopia and Kenya. They have been working on a machine learning tool to detect malnutrition using photos. The project is called MERON — Method for Extremely Rapid Observation of Nutritional Status.  

"If MERON works, we could increase the cost-effectiveness of the diagnoses as compared to traditional methods," says Anita Shah, managing director at Kimetrica Kenya. "It could be more accurate. It's less intrusive, and it can be used in low-resource environments, where it's not possible to send in an army of people with bulky equipment to take measurements, such as in conflict zones."

Read more: How deep fake porn is killing our trust in tech

It's a fascinating use of a technology that tends to come with so much baggage. Even computer scientists have their doubts about AI. They call it a black box technology, meaning that even AI experts don't fully understand the systems they build. And that's okay, because we've now come around to thinking, "We don't really need to understand these systems fully so long as the outcomes are good."

Part of me sees the logic in that, especially if an AI or machine learning technology like MERON can help diagnose malnutrition faster in remote or dangerous areas — where doctors seldom roam — so that help reaches those who need it before they die.

Malnutrition in Syria: detecting and treating malnutrition in conflict zones could be aided by an AI like MERON. Image: Getty Images/AFP/A. Almohibany

"The thing is with the traditional measures — weight, height, MUAC, which is the mid-arm upper circumference — there can be errors, because it is so subjective and, in the end, it's so dependent on how good the person collecting the measurements is. So there's potential here to be more accurate and do things that human beings can't necessarily do perfectly," says Shah.

Kimetrica uses various forms of artificial intelligence. One is a deep convolutional neural network, which the company uses to extract 512 facial features from photographic images. That data is combined with other health statistics, or "anthropometric data," such as a person's weight, height, MUAC, age, gender and ethnicity. All of that information is used to train the model to detect particular malnutrition categories: moderate acute malnutrition or severe acute malnutrition.
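To make that concrete, here is a minimal sketch in Python of the kind of architecture described: a pretrained CNN supplies a 512-dimensional facial feature vector, which is concatenated with tabular anthropometric data and passed to a small classification head. This is an illustration of the general technique, not Kimetrica's actual code; the layer sizes, class labels and stand-in inputs are all assumptions.

```python
import torch
import torch.nn as nn

class MalnutritionClassifier(nn.Module):
    """Toy classifier combining CNN facial features with tabular data."""

    def __init__(self, n_tabular: int = 6, n_classes: int = 3):
        super().__init__()
        # 512 facial features (e.g. from a pretrained ResNet backbone
        # with its final classifier removed) plus six tabular inputs:
        # weight, height, MUAC, age, gender, ethnicity.
        self.head = nn.Sequential(
            nn.Linear(512 + n_tabular, 128),
            nn.ReLU(),
            # e.g. healthy / moderate acute / severe acute malnutrition
            nn.Linear(128, n_classes),
        )

    def forward(self, facial_features, tabular):
        # Concatenate image-derived and tabular features, then classify.
        x = torch.cat([facial_features, tabular], dim=1)
        return self.head(x)

model = MalnutritionClassifier()
faces = torch.randn(8, 512)   # stand-in for CNN feature vectors
stats = torch.randn(8, 6)     # stand-in for normalized anthropometric data
logits = model(faces, stats)  # shape (8, 3): one score per category
```

In a real system the backbone and head would be trained together on labeled photos and measurements; the point here is simply how image features and anthropometric data can feed one model.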

It's a huge mass of data that could just as easily be misused by bad actors to foment political discontent — that has been done without AI to say "your state is failing, your people are starving," only now it could be done with apparently indisputable proof. That's one issue I see.

The other is that I worry about using developing nations as guinea pig populations for research we wouldn't do on ourselves. Which rich, developed community would allow thousands of images of its children to be taken and stored in a black box? I say very few. But then, I'm just a cynic. And, fortunately, Kenya seems to have some strict regulations in place.

"I see where you're coming from and I think the two safeguards are data protection and ethical clearance processes before you're allowed to do any research on human subjects," says Shah. "We had to go through an enormous ethical clearance process in Kenya, and we're not allowed to keep any of these photos after we have trained the model. We have to destroy everything."

From selfies to self-diagnoses

In other areas of health care, AI and humanitarian experts want to use the technology to predict disease outbreaks and help people perform self-diagnoses. But there are again unresolved issues.

"Developing countries potentially have the most to gain from AI, if it's done correctly, but also potentially the most to lose," says Frederic Werner, a member of the steering and outreach committee for the AI for Good Global Summit.

"In order for AI to work you need data, and for data you need mass digitization, and in order to have mass digitization you need connectivity, which comes down to a point where if you don't have those basics in place … unless there's a leapfrog in technology … those benefits won't be realized."  

Wait. There is progress. One project has used AI to try to predict outbreaks of dengue fever rather successfully. The researchers combined weather data with social media and housing data, "all kinds of data that would take humans years to analyze, but which an AI can do almost immediately," says Werner. They scored an 80 percent accuracy rate.
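As a rough sketch of that kind of pipeline, the Python below merges weather, social media and housing signals into one feature table and trains an off-the-shelf classifier to flag likely outbreak weeks. It is not the researchers' actual code, which the article doesn't detail; the feature choices and synthetic data are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_weeks = 500

# One row per region-week, mixing heterogeneous data sources.
X = np.column_stack([
    rng.normal(28, 3, n_weeks),     # mean temperature (deg C)
    rng.normal(150, 40, n_weeks),   # rainfall (mm)
    rng.poisson(20, n_weeks),       # dengue-related social media posts
    rng.normal(0.5, 0.1, n_weeks),  # housing density index
])
# Synthetic labels: 1 = outbreak week (driven by rainfall and chatter).
y = (X[:, 1] + 5 * X[:, 2] + rng.normal(0, 30, n_weeks) > 280).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(f"accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```

The appeal is exactly what Werner describes: once the disparate sources sit in one table, the model can sift them in seconds rather than years.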

The CeBIT trade fair in Hanover, Germany: facial recognition AIs are increasingly deployed in the security domain. But can they help in health care? Image: DW/M. L. Moraleda

Then there's another use for facial recognition — taking photos of your mouth to self-diagnose oral cancer. Can you see people doing that? I can't. If the Sustainable Development Goals (SDGs) also aim to improve human connectivity, in the digital and analog senses of the word, wouldn't you first want to improve human interaction? Send people in?

Well, yes … and no.

"We need to find AI solutions to address these problems, problems in the poorest parts of the world, and not naively rely on an app developed in Silicon Valley that might help someone in Africa, but that person has trouble getting electricity," Werner says. 

So it's starting to look like a case of use AI before it uses you. But the issue of connectivity is a fundamental one.

Read more: Would a global cyber ethics commission help 'counter the lies' of the tech lobby?

To return to the UN's progress report, it says that in 2016 "85 percent of people in the least developed countries were covered by mobile-cellular signal." But read on, because when it comes to fixed-broadband services, only 40 percent of people in developing regions are online, and internet user penetration is 31 percent lower for women than for men. All this will need to be fixed if AI is to make any impact among the poorest nations, because control must surely rest with them, and they can only have digital control with connectivity.

Computers don't judge. Or do they? 

The AI for Good Global Summit is not only about the nuts and bolts of technology, though. It's also about discussing the ethics of using AI. So a lot of the talk is about making AI inclusive and transparent, and about reducing bias.

Digital life in Kenya: urban mobile connectivity in middle-income countries like Kenya is improving, but progress is slower in remote regions. Image: Simon Maina/AFP/Getty Images

Look at Amazon's Alexa or Google Assistant. They use female voices, and they are designed to help you with very simple tasks, like switching on a light or doing the shopping, which "displays stereotypical behavior," says Kriti Sharma, vice president for artificial intelligence at a UK-based company called Sage.

"You get male AI too, like IBM Watson or Salesforce Einstein, and they are designed to make important business decisions. This is the bias that is coming into the algorithm either in the form of data or design."


Sharma has a keen eye on the ethics of AI, but she is also developing an artificial intelligence that aims to act as a companion for women who have experienced domestic violence. It's called rAInbow. It's being developed with Soul City, an institute for social justice in Johannesburg. And they plan to launch it in South Africa later this year. 

"The key problem is that many communities often get 'unserved' with technology revolutions, and women is one of those groups," says Sharma. "In South Africa, we found that one in three women faces abuse, yet only one in 25 cases actually get reported, and it really bothered me. I spent a lot of time researching, and asking why they weren't reporting or getting help."

On the ground, the Sage team found that humans might be the problem.

"Number one, women felt there was a lot of social stigma associated with abuse and harassment. It's not easy for someone going through that to ask for help, even within their own families. And when we did the trials with early versions of rAInbow, the feedback we got was that 'machines don't judge me, but humans do.'"

A protest against domestic violence in Argentina; the words read "silence kills." Many women who have experienced domestic violence find it hard to talk about it to humans. Image: Reuters/M. Brindicci

They felt more comfortable asking for advice from "an entity" that didn't judge or get frustrated ("Why haven't you got help yet?!").

"And people who experience domestic abuse often don't relate to terms like domestic violence, or sexual harassment. That's not what they think happened to them. They think, 'Oh, maybe my partner is too controlling.' So finding information today is difficult because they're not searching on Google for "domestic abuse." But a bot can change the tone and have the conversation differently," Sharma says.

Altered mindsets

Doing things differently is a basic requirement for AI. If you want to use AI, you will have to think and behave differently, accepting its cold calculus. It is something that strikes me with fear — quite possibly irrational fear — because I enjoy the grey zones inherent in human thinking. It's what gives life context and meaning. But an AI can't give you context. It doesn't judge, remember. 

"So I get the fear factor. But computers have been training people even before AI. If you think about Google Search, we have trained ourselves to focus on keywords. We don't ask questions like we normally would, we just think what are the right keywords," says Neil Sahota, World Wide Business Development Leader at IBM Watson.

IBM's supercomputer Watson: so how ready are you to place your trust in this black (and blue) box? Image: AP

This, however, is what Sahota calls the "underlying challenge — it is a very different mindset."

AI is a tool, he says. It will help us discover, predict and simulate scenarios. But we will have to change to get the most out of it.

"We're going to have to break our old way of thinking and embrace these new capabilities before we can unlock the real value. The problem is we can't wrap our heads around how to do this. It's not like 'here's a book' or 'here's an article' and everything's going to click into place," says Sahota.

"I want to try to drive this change in the mindset and the best way for some people is to just see some tangible things that have been done," says Sahota. "When people talk about AI and the things they can do, there is a lot of focus on commercialization. But there is a lot of benefit for social good too. So I hope the summit inspires people to think about social good. And there is nothing wrong with trying to do both at the same time."

Zulfikar Abbany, senior editor fascinated by space, AI and the mind, and how science touches people