Objective: Consciousness is often thought to be that aspect of mind least amenable to being understood or replicated by artificial intelligence (AI). The first-personal, subjective, what-it-is-like-to-be-something nature of consciousness is thought to be untouchable by the computations, algorithms, processing and functions of AI methods. Since AI is the most promising avenue toward artificial consciousness (AC), the conclusion many draw is that AC is even more doomed than AI supposedly is. The objective of this paper is to evaluate the soundness of this inference. Methods: The results are achieved by means of conceptual analysis and argumentation. Results and conclusions: It is shown that pessimism concerning the theoretical possibility of artificial consciousness is unfounded, based as it is on misunderstandings of AI and a lack of awareness of the possible roles AI might play in accounting for or reproducing consciousness. This is done by making some foundational distinctions relevant to AC, and using them to show that some common reasons given for AC scepticism do not touch some of the (usually neglected) possibilities for AC, such as prosthetic, discriminative, practically necessary, and lagom (necessary-but-not-sufficient) AC. Along the way, three strands of the author's work in AC - interactive empiricism, synthetic phenomenology, and ontologically conservative heterophenomenology - are used to illustrate and motivate the distinctions and the defences of AC they make possible.