I'm ok with the rationality treatment of biases, i.e. being aware of and resistant to them, etc.
However, at a more basic level, you can't call things rational if they don't have an actual objective to maximize. Machines and humans alike are pure processes. If we consider a function that takes the state of a human as its argument, then we can unequivocally determine what that human will do. We don't need to ascribe a goal to that human to describe their actions. That is true regardless of whether an observer calls those actions "rational" or not.
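To make that concrete, here is a minimal sketch (the state variables and function name are purely illustrative, not a claim about how humans actually work): behaviour modeled as a plain function of state, with no objective or utility appearing anywhere in the description.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    hunger: float   # illustrative state variables, nothing more
    fatigue: float

def next_action(state: State) -> str:
    """Pure process: state in, action out. No goal is referenced anywhere."""
    if state.hunger > 0.7:
        return "eat"
    if state.fatigue > 0.8:
        return "sleep"
    return "idle"

print(next_action(State(hunger=0.9, fatigue=0.2)))  # -> "eat"
```

An observer could label some of these outputs "rational" and others not, but that label does no work in the description itself: the function already determines the action.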