General AI, or Artificial General Intelligence, is NOT “just around the corner”. How do I know?
The control systems of actual intelligent beings (i.e. humans and animals, without getting into a discussion of what constitutes “intelligence” and whether animals are intelligent) all follow the same pattern:
- Initiate action (output to the actuators)
- Observe the result (input from the sensors)
- Correct the model of the world according to the differences between expected and actual input from the sensors (i.e. the results of the action)
- Determine the desirability of the current state using built-in goal assessments (see the hierarchy of needs, including the need for air, water, food, etc.)
- Develop a plan to improve the current state, bringing it closer to the desired state as defined by the overall goals, using the model of the world developed over time
- Rinse, repeat (a toy sketch of this loop follows below)
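Here is a minimal sketch of that loop, assuming toy stand-ins for the actuators, sensors, goal system, and planner; every name in it is hypothetical, and the point is the shape of the cycle, not the internals of any component:

```python
import random

class WorldModel:
    # The being’s internal model: expected payoff of each known action.
    def __init__(self):
        self.expected_payoff = {"seek_food": 0.0, "rest": 0.0}

    def predict(self, action):
        return self.expected_payoff[action]

    def update(self, action, expected, actual):
        # Correct the model by the discrepancy between expected and actual.
        self.expected_payoff[action] += 0.2 * (actual - expected)

def act_and_observe(action):
    # Output to the actuators, input from the sensors (simulated here).
    return random.gauss(1.0, 0.3) if action == "seek_food" else random.gauss(0.2, 0.3)

def plan(model):
    # Pick the action the current world model expects to pay off best.
    return max(model.expected_payoff, key=model.expected_payoff.get)

model = WorldModel()
for step in range(20):                      # rinse, repeat
    action = plan(model)                    # develop the plan
    expected = model.predict(action)
    actual = act_and_observe(action)        # initiate action, observe the result
    model.update(action, expected, actual)  # correct the model of the world
print(model.expected_payoff)                # the learned model of the world
```

The goal assessment is folded into the payoff numbers here; in a real organism it is a separate, built-in system.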
Simplifying even further, there’s a feedback cycle where, after taking each action and observing the result, the model of the world is continuously adjusted to account for any discrepancies between expected and actual results.
In the world of artificial neural networks this is similar to training. You have a set of data in the form of inputs and outputs: you feed the inputs to the neural network, run inference to get the outputs, and adjust the model weights so that the inference results are as close to the expected (actual) outputs as possible. Training, due to technological limitations, currently comes at a pretty significant expense (time + hardware costs + operating costs).
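As a toy illustration, here is that loop fitting a single weight by gradient descent on mean squared error; the data and names are made up for this example, not taken from any particular framework:

```python
import numpy as np

# Curated training set (inputs and expected outputs), selected up front.
x_train = np.array([1.0, 2.0, 3.0, 4.0])
y_train = np.array([2.1, 3.9, 6.2, 7.8])  # roughly y = 2x

w = 0.0    # the model: a single weight, to be learned
lr = 0.01  # learning rate

for epoch in range(200):
    y_pred = w * x_train                 # run inference on the training inputs
    error = y_pred - y_train             # difference between actual and expected outputs
    grad = 2 * np.mean(error * x_train)  # gradient of the mean squared error
    w -= lr * grad                       # adjust the weight to shrink the error

print(f"learned w = {w:.2f}")            # training complete; w is now frozen
```

Real training differs in scale (billions of weights, enormous datasets, specialized hardware), but the cycle is the same: infer, compare, adjust, repeat.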
Please note that you have a training set of data, which is what you use to adjust the model (calculate the weights). This data is curated (read: subjectively selected by any number of humans).
Once the training is complete, you start using the model of the world to produce new output from the new input, boldly assuming (desperately hoping?) that the underlying rules of the world are not that far off from what was deduced from the training data.
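In terms of the toy model above, that deployment step is just applying the frozen weight to inputs the training set never covered:

```python
x_new = 10.0       # an input well outside the training data
y_new = w * x_new  # frozen w; no further adjustment happens
print(y_new)       # ~20, if the world really does follow y = 2x out here
```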
You see the difference? In actual intelligent beings the world-model training feedback cycle happens continuously, allowing them to adjust to changing circumstances or to newly encountered parts of the world with significantly different underlying rules. In addition, the goal assessment system provides immediate feedback on the desirability of the current state, guiding the being’s behavior.
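To make the contrast concrete, here is a toy sketch of the same one-weight model with the training loop left running after deployment (observe_world and its drift are my invention; the goal-assessment half of the biological loop is omitted):

```python
import random

def observe_world(x, drift):
    # The underlying rule itself drifts over time: y = (2 + drift) * x + noise.
    return (2.0 + drift) * x + random.gauss(0.0, 0.1)

w, lr = 2.0, 0.05  # start from the weight learned during training
for step in range(1000):
    x = random.uniform(0.0, 4.0)
    expected = w * x                              # act: predict
    actual = observe_world(x, drift=step / 1000)  # observe the result
    w -= lr * 2 * (expected - actual) * x         # correct the model immediately

print(f"tracked w = {w:.2f}")  # close to 3.0: the model followed the drifting rule
```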
These components are all but absent from current implementations of generative AI.
So until we have a shot at building this continuous training feedback in real time, with behavior guided by goals, we might as well consider “generative AI”[^1] to be a bit of a parlor trick. Good for pretending to make sense, not good enough for anything else.
[^1]: Which is just a neural network. That’s all it is. Neural networks were proposed in the 1940s and first implemented more than 40 years ago, so can we cut the crap and stop calling it “Generative Artificial Intelligence”, please? Pretty please?