Of the many areas of AI research, artificial general intelligence has always been the most difficult. Think of artificial general intelligence (AGI) as a machine that can understand or learn any intellectual task a human being can. Though AGI was once the stuff of sci-fi novels, research into both AGI and artificial life is growing as AI technology plays a larger and more varied role in modern life.
One company making strides in this area of research is GoodAI, which has dedicated itself to “developing safe AGI to help humanity and understand the universe”. The company was founded by Marek Rosa, who got his start in video games as the CEO and founder of Keen Software House, and now leads both KSH and GoodAI. GoodAI’s recently released Badger Architecture is a promising step towards understanding and developing agents that can adapt to new tasks.
In this interview, we talk to Marek about the current challenges facing AGI, how to create algorithms that learn, and what targets GoodAI has set for the future.
Can you tell me about your path into the research and development of AI systems? How did GoodAI start?
When I was young I was fascinated by sci-fi and technology and was convinced that strong AI or human-level AI could have a profound impact on humanity and society. I realized that we cannot improve machines to a perfect state on our own, but if we create a machine that can improve itself recursively, it can achieve a much higher intelligence level and solve problems of any scale.
However, I was young at the time and the technology was not near the level it is now, so I began programming to help build the skills needed for AI research and to earn enough money to launch my own AI project. This is when I founded my gaming company Keen Software House. With the success of our game Space Engineers, I was able to launch GoodAI in 2014.
AGI is such a fascinating, difficult problem to approach. From your point of view, what are the current challenges facing the development and implementation of AGI?
For us and our Badger Architecture, one of the major challenges we are looking at is gradual learning, or learning to learn. A key question for us is how we can prove that learning of, or adaptation to, a novel task is happening inside the inner loop, and not due to the outer loop. One way to envision this: the outer loop is like evolution or genetic code, which changes across generations, and the inner loop is like learning during a lifetime. The outer loop supplies us with a certain set of skills, and those skills allow us to learn during our lives, in the inner loop.
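The two-loop structure Marek describes can be sketched in miniature. This is an illustrative toy, not GoodAI’s Badger code: the “skills” here are just a single learning rate, the inner loop adapts to one task during a “lifetime”, and the outer loop performs an evolution-like random search over the skills themselves.

```python
import random

random.seed(0)

def inner_loop(skills, task, steps=20):
    """Lifetime learning: adapt a guess to one task, using the inherited
    'skills' (here, just a learning rate) supplied by the outer loop."""
    guess = 0.0
    lr = skills["learning_rate"]
    for _ in range(steps):
        error = task - guess      # signed error on this task
        guess += lr * error       # gradient-like update
    return abs(task - guess)      # error remaining after one "lifetime"

def outer_loop(generations=30, tasks=(1.0, -2.0, 3.5)):
    """Evolution-like search over the skills themselves: keep a mutation
    only if it lets the inner loop learn the tasks better."""
    best = {"learning_rate": 0.01}
    best_score = sum(inner_loop(best, t) for t in tasks)
    for _ in range(generations):
        candidate = {"learning_rate": best["learning_rate"] * random.uniform(0.5, 2.0)}
        score = sum(inner_loop(candidate, t) for t in tasks)
        if score < best_score:
            best, best_score = candidate, score
    return best, best_score

skills, score = outer_loop()
```

The point of the toy is the division of labour: the outer loop never solves any task directly, it only shapes the skills with which the inner loop learns, which mirrors the evolution-versus-lifetime analogy above.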
Another question is how to prove that learned learning capabilities can expand in an open-ended manner, thus allowing the possibility of recursive self-improvement. We also want to prove that the learned learning policy is not overfitted to training tasks and can generalize to novel testing tasks.
These are just a few of the research questions we are focusing on. We actually recently launched the GoodAI Grants initiative, a $300,000 fund aimed at answering these crucial research questions.
Has GoodAI set targets for itself with regards to achieving progress towards AGI? Can you share those with us?
We want to make advances towards resolving the topic of inner-loop learning. One way to tackle it is to develop a learning algorithm that can rely on past experience to learn more efficiently in the future. This topic has already received significant attention in the AI community, and the whole research area of transfer learning revolves around it.
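The idea of relying on past experience to learn more efficiently can be illustrated with a minimal warm-start sketch. This is a generic transfer-learning toy under assumed data, not GoodAI’s method: a parameter fitted on one task gives a head start on a similar task.

```python
def train(initial_w, data, lr=0.1, steps=50):
    """Gradient descent on mean squared error for the model y = w * x."""
    w = initial_w
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# Task A: y = 3x.  Task B is similar: y = 3.1x.
task_a = [(1.0, 3.0), (2.0, 6.0)]
task_b = [(1.0, 3.1), (2.0, 6.2)]

w_a = train(0.0, task_a)              # learn task A from scratch
w_cold = train(0.0, task_b, steps=5)  # task B from scratch, small step budget
w_warm = train(w_a, task_b, steps=5)  # task B warm-started from task A
```

With the same small step budget, the warm-started run lands closer to the true parameter of task B than the cold start does, which is the efficiency gain transfer learning is after.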
Richard Sutton observed that the greatest advances in AI came when researchers switched from hand-engineering algorithms and features to learning them from large datasets, relying on an increasing amount of computation. Similarly, we want the (self-)improving learning algorithm to be learned rather than hand-engineered. As this is a monumental task, it helps if it can be split up into pieces, which are detailed below:
- One of our milestones focuses on the pre-processing of input data into meaningful chunks so the whole training doesn’t have to be end-to-end.
- We believe a multi-expert weight-sharing scheme is a crucial point that drives the efficiency of the search for a learning algorithm, so another milestone is a better understanding and exploitation of multi-agent communication and cooperation.
- Another milestone focuses on the discovery of learning algorithms within the multi-expert setting mentioned above. We’ve also had some interesting results that rely on classical learning algorithms applied in a novel way.
- We have an ongoing debate about how to better grasp and understand what it means to be learning and what constitutes the difference between learning and inference*. We probably will not arrive at an answer with the validity of a mathematical proof, but it should help us understand which aspects of our research matter most and thus allow us to focus better.
* In machine learning, inference can be thought of as choosing an output or configuration for a single input using a fixed model. Learning is the process of, given a set of data samples, identifying the model parameters (or a distribution over model parameters) that best fit the given data.
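The footnote’s distinction can be made concrete with a toy linear model. This sketch is illustrative only and not tied to Badger: `learn` fits a parameter from a set of samples, while `infer` merely applies the already-fitted parameter to a single new input.

```python
# Toy illustration of learning vs. inference for the model y = w * x.

def learn(samples):
    """Learning: least-squares fit of w over a set of (x, y) samples."""
    num = sum(x * y for x, y in samples)
    den = sum(x * x for x, _ in samples)
    return num / den

def infer(w, x):
    """Inference: one forward pass, choosing an output for a single input."""
    return w * x

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # generated by y = 2x
w = learn(data)    # parameters identified from many samples
y = infer(w, 4.0)  # one prediction from the fixed, learned model
```

The asymmetry is the point: `learn` needs the whole dataset and changes the model, while `infer` touches no parameters at all, which is why proving that an agent is doing the former rather than the latter is a meaningful question.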
With the release of the Badger Architecture, you mentioned that you need more people working on it to develop and test hypotheses. Though it is still very early days, are there any particular hypotheses or areas of research that you are excited or interested about?
As mentioned above, proof of gradual learning and proof of learning in the inner loop are very important to us, but we also have many other research questions and areas of interest. We recently ran the Meta-Learning & Multi-Agent Learning Workshop, where we focused on some of the key research areas, including multi-agent reinforcement learning, learned communication, graph neural networks, modular meta-learning, open-endedness, and learned optimizers. These, among other research areas, are what we see as key to developing the Badger Architecture and thereby creating AGI.
I noticed that GoodAI Solutions helps to develop AI solutions that tackle problems in industry and business, too. Have you noticed any trends or patterns in this area of AI application?
At the moment the only commercial application we are working on is our AI game, where users will be able to train AIs themselves through gameplay. There are many connections between gaming and AI. Gaming can serve as an ideal testing ground for AI, where we don’t need to worry about releasing prototypes into the real world and can instead test them in a controlled game environment.
We used to have an applied division where we completed many successful projects in automotive, manufacturing, energy, and many other industries. However, I wanted to keep the focus of GoodAI purely on research and that division turned into a new company, GoodAI Solutions, which now operates separately.
It often seems as though the practical application of AI and the research of it walk different paths. GoodAI seems to balance both of them at the same time. Do you see these two spheres as interconnected? How do you navigate them at the same time?
I think they can be interconnected when there is a natural fit. For example, gaming is a very logical connection for us: I founded Keen Software House, and we have a wealth of experience in the gaming industry. When our Badger Architecture reaches the point where it is applicable to businesses, we will also create commercial opportunities from that.
However, I want to make sure that the business applications link as closely as possible to the research and don’t distract from it as that is the fundamental goal.
Outside of business applications, have you noticed any other trends or patterns within the field of AI?
Some areas that are gaining a lot of traction are meta-learning, multi-agent learning, and lifelong learning. Our recent meta-learning and multi-agent learning workshop attracted a huge amount of interest, with speakers from Google Brain, DeepMind, OpenAI, the University of Oxford, Stanford University, MIT, and many more. I think the idea of AGI used to be seen as sci-fi thinking that attracted only a small amount of interest. However, it is becoming much more mainstream now, with large companies and prestigious universities also jumping on board.
You’ve made some incredible progress in the five years since GoodAI was announced as a company. What do you have planned for the next five?
In the next five years, we aim to develop our Badger Architecture further. It took us time to get to where we are today, and we feel we are on the right path. Our focus is on collaborating with some great minds, universities, and companies to try to solve some of the challenges mentioned before. Our GoodAI Grants initiative is a starting point; it was launched only last month, but we have already had some exciting applications and some potentially important collaborations. I cannot yet say exactly who we will be collaborating with, because none of the contracts are signed, but it is promising to see such interest. The full list of questions we are working on is on our website, but I think our key focus is solving how to verify that our agents are performing learning and not just inference.
If you’d like to keep up to date with GoodAI, be sure to check out their official website for regular updates. If you’d like to read more interviews with experts and professionals in the field of AI and machine learning, be sure to check out the related resources section below, and sign up to the Lionbridge AI newsletter for interviews and news delivered directly to your inbox.