Given that we now supposedly have reasoning models, are there models that can, out of the box or through training, reason in a specific style or way? In the psychological literature and in philosophy (especially Hume and Kant), one usually draws a distinction between two fundamentally different types of reasoning: motivated/instrumental/hypothetical reasoning versus categorical or value-based reasoning. But I can't seem to find models that are trained differently so as to uphold and abide by these deep conceptual distinctions. I personally don't want a model to engage in motivated reasoning, for example, even if I accidentally prompt it to.

Furthermore, I am asking about how the model functions internally, not about what it can output: if generation happens in one big forward pass through latent space, we can't tell whether it is truly reasoning in one way or the other. Or does training by RL produce only motivated reasoning, by definition?
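To make that last worry concrete, here is a minimal REINFORCE sketch of what I mean. Everything in it (the toy vocabulary, the outcome-only reward) is my own assumption, not any lab's actual training setup; the point is just that an objective of this shape scores only the final answer, so the intermediate "reasoning" tokens are reinforced purely instrumentally:

```python
# Toy REINFORCE loop (hypothetical setup): the "chain of thought" is a
# latent token sequence, and the only training signal is a scalar reward
# on the final token. Nothing in the objective scores whether the
# intermediate reasoning is valid.

import torch

torch.manual_seed(0)

VOCAB = 8     # toy vocabulary of "reasoning tokens"
COT_LEN = 4   # length of the sampled chain of thought
logits = torch.zeros(COT_LEN, VOCAB, requires_grad=True)  # the whole "policy"
opt = torch.optim.SGD([logits], lr=0.5)

def reward(cot: torch.Tensor) -> float:
    # Hypothetical outcome-only reward: pays off if the trace ends in
    # token 3, regardless of how it got there. A stand-in for "graded on
    # the final answer, not on the reasoning".
    return 1.0 if cot[-1].item() == 3 else 0.0

for step in range(200):
    dist = torch.distributions.Categorical(logits=logits)
    cot = dist.sample()              # sample a reasoning trace
    logp = dist.log_prob(cot).sum()  # log-prob of the whole trace
    loss = -reward(cot) * logp       # REINFORCE: reward-weighted log-prob
    opt.zero_grad()
    loss.backward()
    opt.step()

# The policy converges on whatever tokens secure the reward; the objective
# never referenced the validity or "style" of the reasoning, only the outcome.
print(torch.softmax(logits[-1], dim=-1))
```

If that is representative of outcome-based RL fine-tuning in general, then the reasoning style is never a training target at all, which is what makes me wonder whether RL can, by construction, produce anything other than instrumental reasoning.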