Very proud of our new preprint in which we model the impact of retinal and non-retinal inputs on dLGN activity. We find that a subpopulation of poorly visually responsive neurons profits most from accounting for non-retinal inputs in our model. In addition, our model uncovered that CT feedback is most effective in the absence of a patterned visual stimulus. Finally, stimulus information can be better decoded during suppression of CT feedback. We discuss how these findings can be embedded into current views on the role of CT feedback in stimulus processing.