AI and Sustainable Development Goals: An Interview with Recursive

Article by Hengtee Lim | December 23, 2020

Sustainable development goals (SDGs) are becoming more and more important for companies of all shapes and sizes. Put simply, the SDGs are a collection of 17 interlinked goals designed to guide progress toward a more sustainable future. Adopted by the UN General Assembly in 2015, the goals cover efforts such as making processes more efficient, reducing waste, increasing diversity, and improving education. Artificial intelligence is one way these goals can be advanced, but leveraging the technology is no simple task.

To learn more about the use of machine learning and AI technology for SDGs, we talked to Tiago Ramalho, the founder of Recursive. Recursive is an AI consulting company built around implementing AI solutions to improve sustainability. In this interview, we talk about how AI can help enterprises achieve sustainable development goals and the current roadblocks that make it difficult to implement AI technology.   

 

Can you introduce us to Recursive and what you do there?

Recursive was founded in October of 2020. We’re a small startup consisting of me and my co-founder, Katsutoshi Yamada. We’re an AI consulting company that aims to bring cutting-edge AI research and knowledge to enterprises that lack the technical know-how.

We’re based in Japan, where a lot of enterprises are starting to apply AI and step up to the next level. At the same time, these enterprises lack the talent to do so. There are already many AI consulting companies out there, so we distinguish Recursive by centering it around our passion for sustainability. [We started with the idea that] we didn’t want to do AI consulting just for the sake of generating revenue; we wanted to have a positive impact on society. So our focus is on AI, but specifically geared towards sustainability.

We provide consulting services to companies that aim to improve their sustainability in concrete ways. A good example is the UN’s framework of sustainable development goals, which many companies are adopting. Many of them are still trying to figure out what exactly they can do to contribute, because there’s a broad variety of goals and ways to achieve them. We aim to help companies discover the technological innovations that can move them toward those goals, and the AI solutions they can implement to get there.

 

How did you make your start in data science and AI?

While I was doing a PhD in statistical physics I realized that I quite enjoyed programming. I was also doing a lot of statistical analysis of biological systems, and it was at this time that deep learning was starting to get popular. In 2014, I started to really think about what I wanted to do after I finished my PhD.

I realized that the tools I was using were very similar to the tools being used in deep learning at the time. On a whim, I applied to Google’s DeepMind, which was still kind of in stealth, and because my skills were relevant I became a research engineer there. I was there for around three years, involved in a variety of different projects, and the company grew immensely in that time, from around a hundred people to a thousand. At the same time, it became a little too big for me, so I wanted to try something else.

I joined Cogent Labs as a research scientist, researching new products and publishing some papers. That was my introduction to developing products for actual customers, and I realized that there are a lot of different challenges in actually developing an applied machine learning product.

I decided to use my experience to do something on my own, but I wanted to be deliberate about the kind of projects I worked on. When I realized I wanted to work more with AI and sustainability, that was when I decided to quit Cogent and start Recursive.

 

How will Recursive help companies achieve their sustainable development goals?

There are three main components that we focus on to help companies achieve their sustainable development goals: efficiency, research, and education. Efficiency refers to improving industrial processes, research means discovering innovative solutions, and education is about closing the gap between what people know and what they need to know.

One concrete area we’re focusing on is biomedical research. We’re in discussions with companies that are interested in using AI to enhance their research. One key AI technology here is simulation for drug discovery. With the recent announcement of the AlphaFold algorithm, we’re excited to look at ways to reproduce or build on this work. AlphaFold is a huge breakthrough because it can actually predict the correct folding of a protein from its genetic sequence, and many partners we’re talking to are very interested in the tech.

Another concrete example of improving efficiency to help with SDGs is in production, shipping, and logistics. For example, 30% less energy consumption in shipping is a meaningful reduction, and graph neural networks are becoming a key technology in this area. Graphs can be used to represent logistical systems or factory assembly lines, which means we can run optimization over them to learn how to manage these systems better.
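To make the graph idea a little more concrete, here is a minimal sketch of representing a small shipping network as a graph and optimizing a route over it. This is our own illustration, not Recursive’s system: the node names, the energy costs, and the use of a classical shortest-path search in place of a graph neural network are all simplifying assumptions.

```python
# Minimal sketch: a shipping network as a weighted directed graph.
# Node names and energy costs are hypothetical; a production system would
# more likely learn routing or scheduling policies, e.g. with graph neural networks.
import networkx as nx

G = nx.DiGraph()
# Each edge is a shipping leg; the weight stands in for energy cost per shipment.
G.add_weighted_edges_from([
    ("factory", "port_A", 4.0),
    ("factory", "port_B", 2.5),
    ("port_A", "warehouse", 1.0),
    ("port_B", "warehouse", 3.5),
])

route = nx.shortest_path(G, "factory", "warehouse", weight="weight")
cost = nx.shortest_path_length(G, "factory", "warehouse", weight="weight")
print(route, cost)  # ['factory', 'port_A', 'warehouse'] 5.0
```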

But for both examples, we have to work with our partners to make the systems smart. In many cases the inputs for a system are manual, so we have to work together to automate data input and data collection, and figure out how to best achieve our goals.

As for education, we want to focus on building machine learning skills here in Japan. We want to build a diverse team not only in terms of ethnicity and background, but also in terms of qualifications. I want to use my knowledge of the field to hire smart people even if they don’t necessarily have the pre-existing requirements many companies ask for. Ideally we’ll raise the next generation of machine learning researchers and engineers here in Japan from a diverse base, and they will go on to build their own teams. I think my approach here is a bit more human-oriented and not so much a technology-based approach.

 

Can AI hinder sustainable development goals?

There was a paper that analyzed the impact of AI on the sustainable development goals. What they found was that on balance it tends to have a positive impact: the positive contributions of AI to each goal tend to be greater than the negative impacts. That said, there is a non-zero potential for negative impact as well, and I think it’s something we absolutely need to be mindful of, as with any technology.

When you develop a new technology, you have the potential to both cause harm and create good. Which of those happens is up to whoever is developing the technology. At Recursive, because we’re going to be developing these technologies, we always need to keep in mind that what we do has real consequences.

One way to mitigate harm is with diversity. Missteps [in AI] often happen not because of malice but because of a lack of knowledge or firsthand experience. More diversity of voices means more people raising questions about what you’re doing.

We also need to make sure that partnerships are not too narrow-minded or short term. Let’s say you make a contract for three months. You develop a solution [for the client] and then you move on to the next contract; you might not necessarily care about what happens next. But if we have long-term commitments, there’s more responsibility on each side for what we’re working towards.

So I think there are a number of things we can do, and it’s something we take very seriously. We want to make sure that when we develop this technology, it’s actually a positive contribution to society that mitigates negative impacts.

 

Having gone from the field of research to practical application, what differences have you noticed when moving from one to the other?

There’s a significant difference, but they’re both hard jobs. It’s not like one is easier than the other, but what you’re optimizing for is very different.

When you’re doing research, what you’re optimizing for is, essentially, a question to which nobody knows the answer. Something nobody has even thought of before. [To answer this question] you need to come up with toy examples that illustrate the point you’re making.

For example, you might say short-term memory is not good enough for neural networks, so you have to craft a dataset or environment that proves this point. But you might find, after you’ve done all of that, that when you apply a deep neural network it actually works. Now you understand that the problem may be different from what you first thought. So a lot of research goes into how to craft and discover these minimal toy examples that challenge what you’re looking for. In short, on the research side you’re figuring out a specific challenge, then developing a model that will beat that challenge. But the model you develop will end up tailored to that particular task. This is why performance in research papers can be very different from performance in practical applications: the real-world dataset wasn’t crafted in the same way.
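As an illustration of what “crafting a minimal toy example” can look like, here is a hypothetical sketch of a tiny recall task: the label is simply the first token of each sequence, so any model that solves it must carry information across the whole sequence. The task design and every parameter are our own assumptions, not something from the interview.

```python
# Toy "short-term memory" task: after reading a whole sequence of tokens,
# a model must output the first token it saw. Everything here is synthetic.
import numpy as np

def make_recall_task(n_samples, seq_len=10, vocab_size=8, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.integers(0, vocab_size, size=(n_samples, seq_len))
    y = X[:, 0]  # target: recall the first token of each sequence
    return X, y

X, y = make_recall_task(5, seq_len=6)
print(X)
print(y)
```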

In applied machine learning you have to limit your scope a little more. You can’t just develop a completely new method, because it will take time and you may not be able to really narrow down the problem in a scientific way. More often you have to take methods that already exist and figure out the complications that make them hard to apply directly. A good example is train/test dataset shift. Often with real-world systems you’ll train on one dataset but the testing data will look completely different.
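Here is a hedged sketch of the kind of shift he describes: the model below learns to rely on a spurious cue that is predictive in the training data, and its accuracy collapses when that cue behaves differently at test time. The data, the features, and the choice of model are synthetic stand-ins, not taken from any real project.

```python
# Synthetic illustration of dataset shift: a cue that is predictive during
# training stops being predictive at test time, and accuracy drops sharply.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, cue_agreement):
    # Feature 0 carries the true (noisy) signal; feature 1 is a spurious cue
    # that agrees with the label with probability `cue_agreement`.
    y = rng.integers(0, 2, size=n)
    signal = y + 0.8 * rng.normal(size=n)
    cue = np.where(rng.random(n) < cue_agreement, y, 1 - y) + 0.1 * rng.normal(size=n)
    return np.column_stack([signal, cue]), y

X_train, y_train = make_data(2000, cue_agreement=0.95)  # cue looks reliable
X_test, y_test = make_data(2000, cue_agreement=0.05)    # cue flips at test time

clf = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))
print("shifted test accuracy:", clf.score(X_test, y_test))
```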

Another example is imbalanced data, and we can see that in medical applications that diagnose particular diseases. These are trained on datasets of images. For example, let’s say that most of your images are healthy (i.e. no diseases), a lot of your images are for disease A, fewer images are for disease B, even fewer for disease C, and the smallest sample of images is for disease D. A machine learning model will learn that disease D is highly improbable because that’s what it has to learn from, right? But actually when a doctor is using the model they need the model to diagnose disease D if there’s a picture of it. Humans don’t diagnose diseases based on the frequency of the disease; what they need is an accurate assessment based on the features in the image.
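One common way of dealing with the imbalance he describes is to re-weight the classes inversely to their frequency, so that a rare “disease D” is not simply ignored during training. The sketch below is illustrative only: the label counts, the random stand-in features, and the simple logistic regression are all assumptions rather than anything from a real medical system.

```python
# Illustrative class-imbalance example: 0 = healthy, 1-4 = diseases A-D,
# with disease D (label 4) the rarest. Features are random stand-ins for
# whatever an image model would actually extract.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils.class_weight import compute_class_weight

rng = np.random.default_rng(0)
y = np.array([0] * 800 + [1] * 120 + [2] * 50 + [3] * 25 + [4] * 5)
X = rng.normal(size=(len(y), 16))

# Weight each class inversely to its frequency so mistakes on the rare
# classes cost more during training than mistakes on "healthy".
weights = compute_class_weight("balanced", classes=np.unique(y), y=y)
print(dict(zip(np.unique(y).tolist(), np.round(weights, 2).tolist())))

clf = LogisticRegression(max_iter=1000, class_weight="balanced")
clf.fit(X, y)
```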

This is an example of the kind of challenge that is often not present in a research setting. To me, that’s what distinguishes applied machine learning from research. This is not to mention additional complexities like client budget, the need for a system to run in real time, and other engineering problems that are often not an issue in a research setting.

 

What roadblocks do you see for companies looking to implement new AI technologies?

The number one roadblock is the lack of understanding regarding what AI can and cannot do. On the one hand you have people that are overly pessimistic, and on the other you have people who think AI is magical and a fix-all.

[One problem I’ve noticed is that] if you don’t understand the technology behind a system, it’s easy to assume it will make the same mistakes a human would. But actually, sometimes AI doesn’t make the mistakes a human would make; rather, it makes mistakes a human would never make. This means that when people evaluate AI, they sometimes get emotionally upset when an AI is 99% accurate but the 1% of mistakes it makes are completely nonsensical. Even when a real human is only correct 98% of the time, some clients prefer not to deploy AI because of those mistakes.

Having said that, of course we need to be mindful of the worst-case scenarios, because in some cases an AI failure can be catastrophic. Humans are less likely to fail catastrophically because they have auxiliary knowledge that we call common sense. We can imagine situations we’ve never seen before, whereas an AI might make a mistake and carry on as normal, and then your boat crashes into an iceberg or something.

One other big issue is data literacy. Often people don’t understand how much or how little data is necessary to train an algorithm, because right now algorithms are very data hungry. Sometimes people assume that a system should learn from a single example because they don’t understand that the system learns from a database of millions of examples. Other times you have people who have been exposed to AI, but they assume you need terabytes of data when in some cases you don’t.

So I feel like a lot of my job is education about what AI can and cannot do, or the details of a specific problem. At the same time, that education goes both ways; I need the client to educate me on the exact nature of their problem. Because if the client wants to make things more efficient, I need to know what the inputs and outputs are, and what we’re optimizing for. So success relies on bi-directional communication.

 

Do you have any plans for Recursive in 2021?

I think the goal for 2021 is to have one major partner with whom we can develop a project we can deliver on: for example, a significant improvement to their sustainability, whether by improving energy efficiency, helping research efforts, or achieving a technological breakthrough.

We’re not expecting to do something at the level of AlphaFold, but if we could help a research lab make a new discovery using AI technology, that would be a real justification for the time we’ve spent. We don’t have ambitions to do a huge IPO or anything like that; we’d like to keep the company privately owned, because the goal is to contribute to sustainability by driving research that will help. Once you have too many investors, the metrics can become too money-driven. When that happens, we lose the flexibility that I wanted when I started the company with my co-founder.


 

You can learn more about Recursive at their official website. For more interviews and perspectives on the future of AI and machine learning, be sure to check the related resources below and subscribe to our newsletter.

The Author
Hengtee Lim

Hengtee is a writer with the Lionbridge marketing team. An Australian who now calls Tokyo home, he can often be found crafting short stories in cafes and coffee shops around the city.
