We start a project to solve one or more problems, to build something we need.
We have faced, face, and will face climate challenges, disease, economic crises, and political instability.
We need sustainable growth and greater capacity in research and economic strategy.
Energy and resources have pushed back many of our limits.
We wouldn't have the society we have without iron, oil,
uranium, etc.
The last century seemed limitless.
This century seems limited.
We see limits on growth, limits on resources and their exploitation, and limits imposed by the climate.
We need to outsmart these limits.
We need specific approaches for each limit, but we also need global approaches that help push every limit.
We also know that more intelligence has helped us do what we wanted to do.
Artificial General Intelligence (AGI) is software whose intellectual capacities are all at least at the level of a human being's.
Recent advances in AI have produced very powerful algorithms that exceed human capabilities, but only on one specific task.
They could be called "Artificial Specialized Intelligences".
If our sustainability is bounded by physical limits, we need to expand our
intellectual capacities.
As human beings, we are limited by the duration of our studies, ageing, fatigue, the limits of our brains, etc.
The world is limited, but humans have physical limits too.
Software has physical limits too, but those limits evolve and get corrected faster than ours.
Hardware keeps improving; we can optimize computations, multiply computational power by the number of devices, etc.
We know that being as smart as a human being is possible, because we exist, though we don't know if software can be. Building such software would already have applications, depending on the hardware cost of running it. But what matters most are the possibilities it could open.
Ok, we make one AGI, and then what? An AGI is at least as smart as a human being.
Human beings are great at pushing limits and solving problems, but one more human being won't change much.
Now, what's going to happen?
Software is duplicable for free, so we only need more hardware.
If this AGI runs at our speed on one unit of hardware, maybe it will run twice as fast on two?
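That "twice as fast" only holds for work that parallelizes perfectly. A classic way to hedge the claim is Amdahl's law, which caps the speedup by whatever fraction of the workload stays serial. A minimal sketch (the function name and the 90% parallel fraction are illustrative assumptions, not figures from this project):

```python
def amdahl_speedup(parallel_fraction: float, units: int) -> float:
    """Amdahl's law: overall speedup when the parallelizable
    fraction of a workload is spread over `units` devices."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / units)

# A perfectly parallel workload really does double with the hardware.
print(amdahl_speedup(1.0, 2))      # 2.0
# A 10% serial portion already trims the gain on two units...
print(amdahl_speedup(0.9, 2))      # ~1.82
# ...and caps it below 10x no matter how many units we add.
print(amdahl_speedup(0.9, 1000))   # ~9.91
```

So doubling the hardware doubles the speed only in the best case; the serial parts of thinking set the ceiling.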
We don't know if anything can outsmart the smartest human being, as it has never been done.
AGI isn't new. It's an old song. DeepMind was founded in 2010, OpenAI in 2015. These are just the most famous efforts, because they're backed by big structures. But before them, Turing had already theorized artificial intelligence in 1950, and before him there was a long road of small steps toward that goal. Did anything change at all? How would we do it now, when no one has done it before?
The means have never been this developed, so if it's feasible now, it should happen quickly.
Some companies and research labs have been working on it for ten years or more, and they've made a lot of progress.
They have achieved things that seemed impossible.
Theoretical and practical advances made in virtual worlds need to be complemented by approaches that operate in the full range of real-world possibilities.
If it's feasible now, we think it will be developed in less than 5 years.
Otherwise, we are probably missing big chunks of theory in cognitive science, or computational power.
Every project may fail.
If we learn that it's impossible with deep learning, that would itself be a success.
If we advance research, just as others have, that will be a success.
But the purity of research isn't always an end in itself. On the path to Artificial General Intelligence, we are sure we can provide real-world uses for standard Deep Learning (DL) algorithms.
Since our goal is to build AGI from DL, we can't get there without building more standard algorithms along the way, for businesses and for people's needs.
From that point of view, it's a 100% success rate.
Of course, since AGI has been attempted but never achieved, the empirical success rate is 0%.
We're somewhere in between.