
October 16 was World Food Day, a day that aims to remind us that many people in the world still suffer from hunger and malnutrition: a recent UN report revealed that more than 800 million people in the world cannot afford to eat every day. That is why organizations such as the World Bank, the UN, Google, and the Red Cross created the “Famine Action Mechanism” (FAM), an initiative that uses artificial intelligence and machine learning techniques to combat world hunger.
The objective of FAM is to use artificial intelligence and big data to predict famines far enough in advance to be able to provide the resources needed to alleviate their effects.
Artificial intelligence systems that forecast future events, like the one used by FAM, are based on automated analysis of past situations: they extract patterns and learn models that allow the future to be forecast. This task, in which an information system learns patterns from data in order to predict future scenarios and events, is known as machine learning.
Machines learn in a way very similar to humans: based on past experience, we are able to discern what is likely to happen in the future. For example, we know that if it rains tomorrow in Madrid, there will likely be more traffic on the roads. But that is a very simple example with a single variable (rain). In complex problems such as the one at hand, forecasting famine in a specific area, hundreds of variables are involved, and they interact in far more complex ways.
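As a toy illustration of this kind of learning (the data and variable names here are invented, not taken from any real traffic system), a model can infer the "rain means more traffic" rule directly from a handful of past observations:

```python
# Toy illustration: learning the "rain -> heavy traffic" pattern from past days.
# All data and variable names are invented for illustration only.
from sklearn.tree import DecisionTreeClassifier

# Past experience: [rain_mm, is_weekday] -> heavy_traffic (1 = yes, 0 = no)
past_days = [
    [0.0, 1], [12.5, 1], [0.0, 0], [8.0, 1],
    [0.0, 1], [20.0, 0], [15.0, 1], [0.0, 0],
]
heavy_traffic = [0, 1, 0, 1, 0, 0, 1, 0]

model = DecisionTreeClassifier(max_depth=2).fit(past_days, heavy_traffic)

# "Tomorrow": 10 mm of rain forecast on a weekday.
print(model.predict([[10.0, 1]]))  # e.g. [1] -> heavy traffic expected
```

A real famine model follows the same principle, only with hundreds of variables instead of two.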
What the Famine Action Mechanism will do is collect the mix of data – social, political, climate, economic, etc. – that was present in the world before past famines occurred and feed it into the artificial intelligence system, so that the system can infer a model that forecasts similar situations in the future. Processing all of this data is not simple. Because there are so many variables and cases, big data techniques are required to manipulate the variables and carry out the calculations needed by the different machine learning algorithms. A problem like this one may require manipulating and analyzing terabytes of data, a task beyond the processing capacity of traditional computers. Computer clusters are used instead: sets of interconnected machines that distribute the work and process the data in a distributed and parallelized fashion. For instance, a big data analysis task on Google’s infrastructure (with its BigQuery tool) may use a cluster of 2,000 computers working in parallel.
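The FAM pipeline itself is not publicly documented, but as a rough sketch of what a distributed query on Google's infrastructure might look like, BigQuery's Python client lets analysts aggregate large tables while the engine parallelizes the scan across the cluster (the project, dataset, table, and column names below are hypothetical):

```python
# Sketch of a distributed aggregation with Google BigQuery's Python client.
# The dataset, table, and column names are hypothetical; FAM's real schema is not public.
from google.cloud import bigquery

client = bigquery.Client()  # requires Google Cloud credentials

query = """
    SELECT region,
           AVG(rainfall_mm) AS avg_rainfall,
           AVG(food_price_index) AS avg_food_price
    FROM `my_project.famine_indicators.monthly_observations`  -- hypothetical table
    WHERE observation_date >= '2015-01-01'
    GROUP BY region
"""

# BigQuery splits the scan and aggregation across many workers transparently;
# the caller only sees the final result set.
for row in client.query(query).result():
    print(row.region, row.avg_rainfall, row.avg_food_price)
```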
But how is such a complex system designed and implemented in order to predict famine risks in certain parts of the world months in advance?
These systems use hundreds of millions of data points from sensors and other sources of information covering different areas of the planet – weather stations, seismic analysis systems, satellite images, socio-economic data, and regions’ geophysical characteristics – to predict the probability that a conflict or famine will arise in a specific region. To analyze such a large amount of data, big data systems distribute the work among hundreds or thousands of computers so the data can be processed in a distributed and parallelized fashion.
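Another common way to distribute this kind of processing – shown here purely as an illustration, since FAM's actual stack is not described – is a framework such as Apache Spark, where each worker in the cluster processes a subset of the data in parallel (file paths and column names are invented):

```python
# Illustrative only: aggregating sensor readings across a cluster with Apache Spark.
# File paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("famine-indicators").getOrCreate()

# Each worker in the cluster reads and processes a subset of the files in parallel.
readings = spark.read.parquet("s3://example-bucket/weather-stations/*.parquet")

monthly = (
    readings
    .withColumn("month", F.date_trunc("month", F.col("measured_at")))
    .groupBy("region", "month")
    .agg(F.avg("temperature_c").alias("avg_temp"),
         F.sum("rainfall_mm").alias("total_rainfall"))
)

monthly.write.mode("overwrite").parquet("s3://example-bucket/aggregates/monthly/")
```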
The steps that must be taken to build a machine learning project like the FAM are as follows:
Define the project’s objectives: What exactly do we want to predict with our project? In this phase, a number of questions are considered, such as whether we want to build a model that predicts famine situations for specific countries or a more abstract model that would work for any region of the planet. Here, for instance, we could decide whether the objective is to predict whether famine will strike a specific region three months from now (yes or no) or to predict a poverty value for the region in three months. This part is very important because the initial questions and objectives will shape all future development and the algorithms to be applied.
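To make the difference concrete, here is a minimal, hypothetical sketch of how those two objectives translate into different target variables (the column names and threshold are invented for illustration):

```python
# Hypothetical sketch: the project objective determines the target variable.
import pandas as pd

# Invented example data: one row per region and month.
df = pd.DataFrame({
    "region": ["A", "A", "B"],
    "month": ["2018-01", "2018-02", "2018-01"],
    "poverty_index_in_3_months": [0.42, 0.55, 0.31],
})

# Option 1 - classification: will there be famine in 3 months? (yes / no)
df["famine_in_3_months"] = (df["poverty_index_in_3_months"] > 0.5).astype(int)

# Option 2 - regression: predict the poverty index itself, a continuous value.
regression_target = df["poverty_index_in_3_months"]
```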
Gather and merge data: This phase clearly depends on the previous one, and it consists of gathering data from local databases, weather and geophysical sensors, macro- and microeconomic indicators, satellite images, food and oil prices, areas with armed conflicts, etc. In this phase, the work done by the World Bank and the Red Cross is key, as they have to compile all of that data and ensure that it is as complete and accurate as possible. This task is not at all easy: in many cases the data comes from areas with little or no technological infrastructure, and the information may be biased by the political and social situation of the area.
In order to facilitate the data collection efforts for projects like the FAM, initiatives have arisen such as the Open Data for Resilience Initiative, which gathers and publishes open data from several countries at risk of some kind of catastrophe.
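As a rough sketch of what the merging step might look like (the file names and columns are hypothetical, not FAM's real sources), heterogeneous tables can be joined on a shared region-and-month key:

```python
# Sketch of merging heterogeneous sources into one table per region and month.
# File names and columns are hypothetical.
import pandas as pd

weather = pd.read_csv("weather_by_region_month.csv")      # rainfall, temperature, ...
prices = pd.read_csv("food_prices_by_region_month.csv")   # staple food price indices
conflicts = pd.read_csv("armed_conflicts_by_region_month.csv")

dataset = (
    weather
    .merge(prices, on=["region", "month"], how="left")
    .merge(conflicts, on=["region", "month"], how="left")
)

# Regions with no recorded conflicts simply get missing values here,
# to be handled in the cleansing phase.
print(dataset.head())
```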
Data cleansing and transformation: This is perhaps the most critical phase in the entire machine learning project. The team of data engineers must ensure that the information gathered is transformed to remove noise before the learning algorithms are applied. A number of tasks are carried out in this phase: handling incomplete information, processing outliers (observations far from the rest), normalizing the data, managing unbalanced and biased data, calculating the predictive capacity of the attributes, and discarding any that do not add value to the objective. Feature extraction is also carried out here: it consists of generating new attributes from the existing variables, e.g., adding a variable such as the number of days without rain derived from weather information, or the width of a riverbed derived from satellite images (which may be a good way of forecasting flooding).
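A minimal sketch of some of these cleansing and feature-extraction steps, assuming a merged table like the one above (file and column names are invented), could look like this:

```python
# Sketch of typical cleansing and feature-extraction steps; names are invented.
import pandas as pd
from sklearn.preprocessing import StandardScaler

dataset = pd.read_csv("merged_famine_dataset.csv")   # one row per region and month

# Handle incomplete information: fill missing rainfall with each region's median.
dataset["rainfall_mm"] = dataset.groupby("region")["rainfall_mm"].transform(
    lambda s: s.fillna(s.median())
)

# Process outliers: cap extreme food prices at the 1st and 99th percentiles.
low, high = dataset["food_price_index"].quantile([0.01, 0.99])
dataset["food_price_index"] = dataset["food_price_index"].clip(low, high)

# Feature extraction: derive "days without rain" per region and month
# from a (hypothetical) daily weather table.
daily = pd.read_csv("daily_weather.csv")              # region, date, rainfall_mm
daily["month"] = pd.to_datetime(daily["date"]).dt.to_period("M").astype(str)
dry_days = (
    daily.assign(dry=daily["rainfall_mm"] == 0)
         .groupby(["region", "month"])["dry"].sum()
         .rename("days_without_rain")
         .reset_index()
)
dataset = dataset.merge(dry_days, on=["region", "month"], how="left")

# Normalize numeric attributes so they share a comparable scale.
numeric_cols = ["rainfall_mm", "food_price_index", "days_without_rain"]
dataset[numeric_cols] = StandardScaler().fit_transform(dataset[numeric_cols])
```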
Create a predictive model with machine learning: In this phase, the data that has already been processed is used to feed an algorithm that analyzes millions of observations in order to find patterns and rules that will serve to predict what is going to occur in future situations. The machine learning techniques to be used depend on the objective set in the first phase. For a system such as FAM that predicts situations of famine, we are dealing with the kind of automated learning known as supervised learning. Within it, we can build either a classification system or a regression system, depending on whether, back when the main objective was set, we decided to predict a categorical value or a numeric one.
Given the characteristics of the FAM project, it will likely be a classification problem. There are several families of machine learning algorithms and techniques that can be used for classification tasks, and unfortunately there is no golden rule to indicate which one will work best: the best algorithm may differ from problem to problem. For this reason, machine learning projects usually generate several models with different algorithms and evaluate them all to see which one performs best. Some of the most commonly used prediction algorithms today are support vector machines and algorithms based on decision trees, such as random forests and gradient boosted trees. For complex problems with a great deal of data (such as the one at hand), deep learning – learning algorithms based on complex neural networks – usually works well.
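A rough sketch of that "try several algorithms and compare them" approach, using the algorithm families mentioned above on a hypothetical cleaned dataset, might look like this:

```python
# Sketch: train several candidate classifiers and compare them (data is hypothetical).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

dataset = pd.read_csv("clean_famine_dataset.csv")
X = dataset.drop(columns=["famine_in_3_months"])
y = dataset["famine_in_3_months"]

candidates = {
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
    "svm": SVC(kernel="rbf"),
}

# There is no golden rule, so every candidate is evaluated the same way.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="f1")
    print(f"{name}: mean F1 = {scores.mean():.3f}")
```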
Assessment and testing: Once several models are ready, they must be assessed to determine which one is best at predicting future observations. It should be noted that the artificial intelligence algorithms mentioned are never 100% accurate and always make some errors. This matters because, even if it is beyond the scope of this article, the question of who is responsible for the errors committed by artificial intelligence systems is currently generating quite a bit of debate. To assess the different models, the initial data set is usually divided into two parts: one part is used to train the algorithms and generate the predictive models, and the other, not used during learning, is used to measure the accuracy of the models (i.e., how often they are right, and how often they are wrong, about situations they have never seen).
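A minimal sketch of that hold-out evaluation, again on hypothetical data, could look like this (recall is included because, for famine prediction, missing a real famine is presumably costlier than a false alarm):

```python
# Sketch of the train / hold-out evaluation described above (hypothetical data).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score

dataset = pd.read_csv("clean_famine_dataset.csv")
X = dataset.drop(columns=["famine_in_3_months"])
y = dataset["famine_in_3_months"]

# One part is used to learn, the other is kept aside to estimate real accuracy.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
predictions = model.predict(X_test)

print("accuracy:", accuracy_score(y_test, predictions))
# Recall matters here: missing a real famine is far costlier than a false alarm.
print("recall:  ", recall_score(y_test, predictions))
```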
EXAMPLE: the case of Guatemala.
Guatemala is a country very exposed to natural disasters such as earthquakes, fires, and volcanic activity, and it is also one of the poorest countries in Latin America, so it is a region in which natural disasters may cause famine.
In 1976, Guatemala suffered an earthquake that decimated the town of Los Amates and created famine and extreme poverty in the area. The earthquake caused 23,000 deaths, injured 76,000, and led the country’s GDP to drop by 20%.
Predicting an earthquake months in advance is tremendously complicated, but can we analyze satellite images with machine learning to automatically identify the areas where an earthquake would be particularly devastating?

The World Bank used satellite and drone images, as well as 360º street photos, to identify the homes at high risk of collapsing during an earthquake. A manual inspection on the ground by engineers and architects would have been prohibitively expensive. These images, however, could be analyzed with artificial intelligence at a much lower cost. The analysis identified the buildings at high risk of collapse with an accuracy of 85%, allowing campaigns to be designed that optimize the economic resources available, minimize the impact of disasters in the country, and save many lives.
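The World Bank's actual model and data are not public, so the following is only a generic sketch of how a pre-trained convolutional network could be adapted to score building images for collapse risk (the file name and the two-class setup are assumptions, not the real system):

```python
# Generic sketch of image classification for building-risk screening.
# The World Bank's actual model and data are not public; everything here is illustrative.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Start from a network pre-trained on generic images and adapt the final layer
# to two classes: high risk of collapse vs. low risk.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model.eval()  # assumed to have been fine-tuned on labeled rooftop / street-view images

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = Image.open("building_0001.jpg").convert("RGB")   # hypothetical file
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    probabilities = torch.softmax(model(batch), dim=1)[0]
print("P(high risk of collapse):", float(probabilities[1]))
```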