I propose a double standard.

If you’re going to put the word “cognitive” in front of the word “model,” that model had better be good at something.

Before machine learning got genuinely good at learning, a person building a model of intelligent thought had modest ambitions: can I make my model account for accuracy and reaction time in this task? Or maybe in this family of tasks? Maybe in this neural data too?

We know computers can do better now: they solve hard problems. ML has raised its bar, and our standard for cognitive modeling must rise with it.

It’s a brute fact: if we’re to understand cognition, we must build models capable of succeeding at complex computations, of doing the kinds of things real human (and animal) cognition does easily every day.

The double standard I’m proposing is this. A good cognitive model must do two things:

  1. Account for human (animal, neural) behavior and
  2. Approach cutting-edge performance on an appropriate ML dataset.
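
To make the standard concrete, here is a minimal sketch of what a two-part evaluation might look like. All of the names, thresholds, and numbers below are hypothetical; the only point is that a model is scored on both criteria, and passing either one alone isn’t enough.

```python
from dataclasses import dataclass


@dataclass
class Evaluation:
    behavioral_fit: float   # e.g., fit to human accuracy / reaction-time data (0-1)
    benchmark_score: float  # e.g., accuracy on an appropriate ML benchmark
    sota_score: float       # current state-of-the-art score on that benchmark

    def passes_double_standard(self, fit_threshold: float = 0.8,
                               sota_margin: float = 0.05) -> bool:
        """A model must (1) account for the human/animal/neural data and
        (2) come within a small margin of cutting-edge ML performance."""
        accounts_for_behavior = self.behavioral_fit >= fit_threshold
        near_sota = self.benchmark_score >= self.sota_score - sota_margin
        return accounts_for_behavior and near_sota


if __name__ == "__main__":
    # Hypothetical case: fits the human data well but falls far short of
    # the ML state of the art, so it fails the double standard.
    result = Evaluation(behavioral_fit=0.92, benchmark_score=0.61, sota_score=0.88)
    print(result.passes_double_standard())  # False
```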
