Equally at national and the highest international levels, few issues in technology governance are more vexed than those around the precautionary principle. Often using colourful rhetoric, and frequently paying scant attention to the substantive form taken by precaution in any given setting, even ostensibly academic analyses accuse precautionary approaches of being ‘dangerous’, ‘arbitrary’, ‘capricious’ and ‘irrational’, somehow serving indiscriminately to ‘stifle discovery’, ‘suppress innovation’ and foster an ‘anti-technology’ climate. The widely advocated alternative is ‘science-based’ risk assessment, under which single aggregated probabilities are assigned to supposedly definitively characterised possibilities and asserted to offer sufficient representations of the many intractable dimensions of uncertainty, ambiguity and ignorance. The high economic and political stakes, combined with the expediency of such methods to entrenched institutional and technological interests, intensify these arguments. Amidst all the noise, it is easy to miss the more balanced, reasonable realities of precaution. By reference to a large literature on all sides of these debates, this paper shows how these pressures are not only misleading but themselves seriously unscientific, leading to potentially grave vulnerabilities. Experience over more than a century of technology governance shows that the dominant issues are not about the calculation of probabilities, but about the effects of power in innovation and regulatory systems, the need for balanced consideration of alternative options, scrutiny of claimed benefits as much as alleged risks, and constant vigilance for the ever-present possibility of surprise. In this light, it is not rational to assert that incertitudes of many difficult kinds must always take the convenient forms susceptible to risk assessment. To invoke the name of science as a whole in seeking to force such practices gravely undermines science itself.
And these pressures also seriously misrepresent the nature of innovation processes, in which a branching evolutionary dynamic means that concerns over particular trajectories simply help to favour alternative innovation pathways. Precaution is about steering innovation, not blocking it. It is not necessarily about ‘banning’ anything, but simply about taking the time and effort to gather deeper and more relevant information and to consider wider options. Under conditions of incertitude to which risk assessment is, even under its own definition, quite simply inapplicable, precaution offers a means to build more robust understandings of the implications of divergent views of the world and more diverse possibilities for action. Of course, like risk assessment, precaution is sometimes implemented in mistaken or exaggerated ways. But the reason such a sensible, measured approach attracts such intense general criticism has more to do with the pervasive imprints of power in and around conventional regulatory processes than with any intrinsic features of precaution itself. Whilst partisan lobbying is legitimate in a democracy as a way to advance narrow sectoral interests, it is unfortunate when such rhetorics seek spuriously to don the clothing of disinterested science and reason in the public interest. Taking the best of all approaches, this paper ends by outlining a general framework under which more rigorous and comprehensive precautionary forms of appraisal can be reconciled with risk-based approaches under conditions where these remain applicable. A number of practical implications arise for innovation and regulatory policy alike, spanning many different sectors of emerging technologies. In the end, precaution is identified as being about escaping the technocratic capture under which sectoral interests use narrow risk assessment to force particular views of the world.
What precaution enables instead is more democratic choice, under ever-present uncertainties, over the best directions to be taken by innovation in any given field.