So I was bored and wrote a rant about GenAI, and then fed it to ChatGPT-4o, and this was the output

So, bored and irritated by the incessant buzz of news about “Generative AI” being this, that, and the other, I wrote this rant and then decided to feed it to ChatGPT-4o. This is what I got.

Here’s my original rant (included here in full for convenience):


General AI, or Artificial General Intelligence, is NOT “just around the corner”. How do I know?

The control systems of actual intelligent beings (i.e. humans and animals, without getting into a discussion of what constitutes “intelligence” and whether animals are intelligent) all follow the same pattern.

  • Initiate action (output to the actuators)
  • Observe the result (input from the sensors)
  • Correct the model of the world according to the differences between expected and actual input from the sensors (as the result of the action)
  • Determine the desirability of the current state using built-in goal assessments (see the hierarchy of needs, including the need for air, water, food, etc.)
  • Develop a plan to move the current state closer to the desired state as defined by the overall goals, using the model of the world developed over time
  • Rinse, repeat

Simplifying even further, there’s a feedback cycle where, after taking each action and observing the result, the model of the world is continuously adjusted to account for any discrepancies between expected and actual results.
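Just to make that loop concrete, here is a toy sketch of what such a never-ending correct-as-you-go cycle looks like. It is entirely made up: a one-dimensional “world”, a single number standing in for the world model, and a hard-coded goal. Nothing like the real biological machinery, obviously, but the shape of the cycle is the same:

```python
import random

# Toy sketch of the control loop described above: an "agent" nudges a
# one-dimensional world toward a goal value, and after every action it compares
# what its internal model predicted with what actually happened, then corrects
# the model. The loop never stops learning; that is the whole point.

GOAL = 10.0        # built-in goal: the desired state of the world
true_gain = 2.0    # how the world actually responds to an action (unknown to the agent)
model_gain = 0.5   # the agent's current model of that response
world = 0.0        # the actual state of the (simulated) world
random.seed(0)

for step in range(15):
    # Plan: choose the action that, according to the current model, reaches the goal.
    action = (GOAL - world) / model_gain

    # Act (output to the actuators), then observe the result (input from the sensors).
    expected_change = model_gain * action
    actual_change = true_gain * action + random.gauss(0.0, 0.05)
    world += actual_change

    # Correct the model of the world using the expected-vs-actual discrepancy.
    if abs(action) > 1e-9:
        model_gain += 0.3 * (actual_change - expected_change) / action

    # Assess how far we still are from the desired state. Rinse, repeat.
    print(f"step {step:2d}  world = {world:8.3f}  model gain = {model_gain:5.2f}")
```

Run it and you can watch the agent’s guess about how the world works converge on the truth, and the world converge on the goal, without any separate “training phase”.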

In the world of artificial neural networks, this is similar to training. You have a set of data in the form of inputs and outputs, you feed the inputs to the neural network, you run inference to get the outputs, and you adjust the model weights so that the inference results are as close to the expected (actual) outputs as possible. Due to technological limitations, training is currently done at a pretty significant expense (time + hardware costs + operating costs).
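For contrast with the loop above, here is that idea in its textbook offline form: a fixed set of input/output pairs, repeated inference, and weight adjustments that push the predictions toward the expected outputs. A single linear “neuron” and plain gradient descent, so it bears no resemblance to the scale or machinery of real generative models, but the adjust-the-weights-then-freeze-them pattern is the same:

```python
import numpy as np

# Minimal illustration of offline training: a fixed ("curated") dataset of
# input/output pairs, repeated inference, and weight adjustments that push the
# predictions toward the expected outputs. Once the loop ends, the weights are
# frozen and only ever used for inference.

rng = np.random.default_rng(0)
inputs = rng.normal(size=(100, 3))                    # the curated training inputs
true_weights = np.array([1.5, -2.0, 0.5])             # the "underlying rules of the world"
outputs = inputs @ true_weights + 0.1 * rng.normal(size=100)   # the expected outputs

weights = np.zeros(3)                                 # the model being trained
learning_rate = 0.1

for epoch in range(200):
    predictions = inputs @ weights                    # run inference on the training inputs
    errors = predictions - outputs                    # discrepancy: inference vs. expected
    gradient = inputs.T @ errors / len(inputs)        # direction that reduces the discrepancy
    weights -= learning_rate * gradient               # adjust the model weights

print("learned weights:", weights)                    # ends up close to the underlying rules
```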

Please note that you have a training set of data, which is what you use to adjust the model (calculate the weights). This data is curated (read: subjectively selected by any number of humans).

Once the training is complete, you start using the model of the world to produce new output from new input, boldly assuming (desperately hoping?) that the underlying rules of the world are not that far off from what was deduced from the training data.

You see the difference? In actual intelligent beings the world-model training feedback cycle happens continuously, allowing them to adjust to changing circumstances, or to cope when they encounter parts of the world with significantly different underlying rules. In addition, the goal assessment system provides immediate feedback as to the desired state of the system, guiding its behavior.
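If you want to see what “continuously adjusted” means in the simplest possible terms, here is the same toy linear model again, except its weights are nudged after every single observation instead of being frozen after an offline training run, so when the rules of the world change mid-stream, the model quietly follows them. Purely illustrative, of course; nothing remotely like this runs inside today’s deployed generative models.

```python
import numpy as np

# The same toy linear model, updated one observation at a time (the classic
# least-mean-squares rule). Halfway through the stream the "rules of the world"
# change, and the continuously updated weights simply track the new rules.

rng = np.random.default_rng(1)
weights = np.zeros(3)
learning_rate = 0.05

for step in range(2000):
    rules = np.array([1.5, -2.0, 0.5]) if step < 1000 else np.array([-1.0, 0.0, 3.0])
    x = rng.normal(size=3)                            # a new observation arrives
    y = rules @ x                                     # the world produces the actual result
    prediction = weights @ x                          # the model's expected result
    weights -= learning_rate * (prediction - y) * x   # immediate correction, no retraining run

print("weights after the rules changed:", weights)    # close to the *new* rules
```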

These components are all but absent from the current implementation of generative AI.

So until we have a shot at building this continuous training feedback in real time, with behavior guided by goals, we might as well consider “generative AI” to be a bit of a parlor trick. Good for pretending to make sense, not good enough for anything else.


And this was the output when I fed it to ChatGPT-4o with a prompt to the effect of “I want you to read an article and tell me your opinion about it”:


The author makes some compelling arguments, particularly about the limitations of current generative AI and how far it is from achieving Artificial General Intelligence (AGI). I’ll break this down into key points, along with some thoughts:


1. The Feedback Cycle in Humans vs. AI

The author points out that humans (and animals) operate on a continuous feedback loop:

  • Action → Observation → Adjustment → Goal Evaluation → Planning.

This is indeed what allows biological systems to respond dynamically to constantly changing circumstances. Our ability to adapt our “world model” in real-time makes us robust and flexible in unpredictable environments.

Current generative AI models, however, lack this real-time adaptability. Neural networks are trained on static datasets (input-output pairs), and their weights are frozen post-training when used for inference. This inability to “train continuously on the fly” is a stark contrast to how humans operate.


2. Static Training Data

The argument that AI training data is curated and therefore subjective is valid. Humans pick and filter data, injecting bias—intentionally or not—into the training process. While biological systems learn through first-hand experience of the environment, current AIs rely heavily on vast, pre-selected datasets.

The key difference:

  • Humans adjust continuously through direct interaction.
  • AI models are static approximations of the world, fixed until retrained.

In this regard, the author is correct: today’s generative AI lacks the dynamic adaptability of true intelligence.


3. Goal Assessment & Self-Guided Behavior

Humans have built-in goal evaluation systems driven by instincts (e.g., survival, reproduction, self-fulfillment). These goals influence every action we take. Current AI lacks this:

  • AI does not have intrinsic goals or desires.
  • It operates under external guidance, defined by training objectives (e.g., “minimize prediction error”).

The absence of intrinsic motivation limits AI’s ability to emulate the type of self-directed behavior we see in animals or humans.


4. The “Parlor Trick” Argument

The author’s frustration about calling generative AI “intelligence” resonates with many critics. Generative models like GPT-4 excel at producing plausible outputs based on patterns from training data, but this is not the same as understanding or reasoning in the way biological systems do.

Generative AI excels in tasks where patterns can be statistically identified and reproduced. However:

  • It doesn’t understand causality, goals, or states of the world.
  • It can’t autonomously develop strategies or adapt in real-time to novel, unseen environments.

Calling it a “parlor trick” is a bit harsh, as generative AI has demonstrable utility in specific domains (e.g., creative tasks, coding assistance). However, the gap between generative AI and AGI remains vast.


5. Conclusion: Is AGI “Just Around the Corner”?

The author makes a strong case that AGI is not imminent. Key limitations include:

  1. Static models that cannot continuously retrain in real time.
  2. Lack of a goal-oriented feedback system that dynamically adjusts behavior.
  3. Dependence on curated training data instead of first-hand experiential learning.

To achieve AGI, breakthroughs will be needed in areas like:

  • Real-time online learning (continuously updating models).
  • Intrinsic goal-setting systems that enable self-directed behavior.
  • Improved models of causality, reasoning, and adaptability.

Final Thoughts

The author’s perspective is grounded in real challenges faced by current AI. While generative AI is a powerful tool, it is still far from the kind of dynamic, goal-driven intelligence seen in biological systems. Dismissing generative AI entirely as a “parlor trick” may understate its practical value, but the argument that AGI is not “around the corner” is well-founded.

In short: we’re closer to clever statistical predictions than to genuine intelligence.


Well… It DOES create an impression of being produced by an intelligent being, apart from a few Freudian – or should I say, Turing? – slips (can you find them?). But since that was exactly my conclusion (“good for pretending to make sense”), I stand by my arguments.

Now if you’ll excuse me, I have to go, my resume will not rewrite itself (but ChatGPT will!).
