In the predictive processing framework, the brain is viewed as a multi-layered prediction engine whose task is to anticipate the incoming flow of sensory information. Each layer of the engine is taken to implement a generative model, in an arrangement where higher layers send predictions to lower layers and lower layers pass prediction errors upward. Minimizing these errors is assumed to turn the structure into a largely veridical model of the world. The scheme is advocated as a way of explaining processing in the brain. But what is its status from a computational point of view? What calculations are implied? Over what data do they operate? What effects are achieved? This paper considers predictive processing from a computational/engineering perspective and identifies a number of technical problems in the scheme. Ways of eliminating these problems are also considered.
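The exchange described above, with higher layers sending predictions down and lower layers passing prediction errors up, can be sketched computationally. The following is a minimal, illustrative two-layer loop, not the formal scheme of any particular predictive-processing account: a higher layer holds a latent estimate, generates a prediction of the input through an assumed linear generative model, and updates the estimate by gradient descent on the squared prediction error. All names and parameter values are hypothetical.

```python
def predictive_coding_step(latent, observation, weight, lr=0.1):
    """One update of the higher layer's latent estimate.

    The higher layer sends a top-down prediction (weight * latent);
    the lower layer returns the prediction error; the latent estimate
    moves down the gradient of 0.5 * error**2.
    """
    prediction = weight * latent           # top-down prediction
    error = observation - prediction       # bottom-up prediction error
    latent += lr * weight * error          # gradient step on squared error
    return latent, error

def infer(observation, weight=2.0, steps=100):
    """Iterate error minimization until the prediction matches the input."""
    latent = 0.0
    for _ in range(steps):
        latent, error = predictive_coding_step(latent, observation, weight)
    return latent, error

latent, error = infer(observation=4.0)
# latent converges toward observation / weight; error toward zero
```

Under this toy linear model, repeatedly minimizing the error drives the latent estimate to the value that best explains the observation, which is the sense in which error minimization is said to yield a veridical model of the input.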