The shortcomings of LLMs reveal how they differ from human intelligence


The shortcomings of modern language models (transformers) matter a great deal: they show precisely where human intelligence differs from what we have managed to implement in LLMs and CNNs. Today, four important differences stand out:


- We clearly do not learn the way transformers do: a person only needs to study a textbook chapter on a topic in mathematics to start applying the rules set out in it. Neural networks need millions of examples.

- We create new concepts even when nothing like them appeared in our prior experience. Neural networks have yet to make a single genuine discovery.

- We critically evaluate each of our steps, checking every statement against the rest of our knowledge. We have "built-in" common sense.

- We can follow a rule the first time it is stated, without any examples, and we can generalize (draw an inductive conclusion) from a single example at a very abstract level.