I. What is Non-Reduction?
Non-reductive physicalism says that everything is exhaustively describable by physics, and that the human mind is entirely explainable by physics, but that we humans have special explanatory interests that cannot be neatly captured in physical terms. To take an abstract example, suppose something is entirely describable in terms of physics, with physical properties P1, P2, P3, P4, P5, P6, etc. Because of our human interests, we might want to group the first four together, and so WE create a logical grouping and subsume P1 through P4 under the property Q1. So Q1 just means: has physical properties P1, P2, P3, P4, and "not Q1" means that it has physical properties P5, P6, etc. The physical properties are called "first-order properties," and our logical groupings, created entirely by us, are called "second-order properties."
Another analogy is to consider juniors and seniors in college. We refer to this group as "upperclassmen." So "junior" and "senior" are first-order properties, and "upperclassman" is a second-order property, which is really just a logical grouping of the two first-order properties.
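The upperclassman analogy can be sketched in code (Python here purely for illustration; the function names and the year-number encoding are assumptions, not from the text):

```python
# A minimal sketch of the upperclassman analogy. The property
# "upperclassman" adds no new fact about a student; it is just a
# logical grouping (a disjunction) of the first-order properties
# "junior" and "senior". All names here are illustrative.

def is_junior(student):
    return student["year"] == 3

def is_senior(student):
    return student["year"] == 4

def is_upperclassman(student):
    # Second-order: defined entirely in terms of first-order properties.
    return is_junior(student) or is_senior(student)

print(is_upperclassman({"year": 3}))  # True: a junior is an upperclassman
print(is_upperclassman({"year": 1}))  # False: a freshman is not
```

Nothing over and above the first-order facts is needed to settle whether someone is an upperclassman; the second-order predicate is pure shorthand.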
So non-reductive physicalism states that our ordinary mental descriptions ("X believes that P," "Y desires that Q") are just logical groupings of physical properties that cannot be neatly swapped out for physical language, even though everything is ultimately physical. While sometimes colloquially referred to as "property dualism," non-reductive physicalism is better termed a type of "predicate dualism," meaning that the only dualism is in our language, not in reality.
II. The Multiple Realizability objection to identity theory
The main version of reductionism, called "identity theory", states that mental states are brain states. So as an example "pain" is the firing of C-fibers, and the firing of C-fibers is "pain." They are two different names for the same one thing, in the same way that Mark Twain and Samuel Clemens are two different names for the same one person. This means that you cannot logically have pain without the firing of C-fibers and vice versa, in the same way you cannot logically have Mark Twain without Samuel Clemens.
But, according to the multiple realizability argument, you can logically have pain without the firing of C-fibers. Just imagine an alien telling you that it is experiencing pain. But the alien, having a completely different evolutionary history from ours, would not have C-fibers. Or think of a future artificial intelligence. Do we want to say that such a being, having circuitry and thus no C-fibers, will never be able to experience pain or any other mental state? The fact that it is even logically possible for such beings to experience mental states is enough for the argument to succeed. It is not logically possible to have Mark Twain without also having Samuel Clemens, for as soon as you are picturing Mark Twain, you are also picturing Samuel Clemens. They are just two names for the same one person.
Likewise, if mental states are just another name for brain states, then it should not be logically possible to picture the one without the other. If the belief that there are 8 planets in our solar system is just another name for the firing of XYZ neurons, then we would have to say that aliens who visit our solar system can never have that belief, since they lack XYZ neurons.
The obvious answer is that mental states can be realized by many different types of physical systems: human brains, alien brains, computers, etc. But if so, then this is non-reductive physicalism: the mind will never be reduced to just physics, since mental states are now abstract concepts.
However, since it is controversial whether conceivability is a reliable guide to possibility, defenders of multiple realizability have sought empirical support for their thesis as well.
Empirical Support for Multiple Realizability
Defenders can appeal to biology and evolution. Since mental states like pain help animals adapt to their environment, there is good reason to think that if aliens have evolved on other planets, they too will experience mental states that help them survive. However, since their evolutionary history is different from ours, they would lack the very same neurons and brains that we have.
Another appeal to empirical support is via neuroscience, which has shown that the brain is plastic: when certain areas are damaged, other areas can take over their functions.
III. Reductivist Answers to Multiple Realizability
While the multiple realizability argument has caused a general abandonment of identity theory, there have been responses to it.
One way of responding is to say that "pain" is not some general mental category; rather, there are specific pains, and each one is its own separate mental event: for example, pain-in-humans, pain-in-martians, etc. Pain-in-humans would then be identical to a certain human brain state, and pain-in-martians would be identical to a specific (martian) brain state.
Another way of responding is to argue that mental events are identical to physical events, and that all brains (human, alien, etc.) share some basic physical commonality, similar to how we once thought magnetism and electricity were two separate physical domains but now realize they are just one (electromagnetism).
Despite these answers, the multiple realizability argument has been very influential and the majority of philosophers of mind have shifted over to non-reductive physicalism as a result.
IV. Functionalism
The idea that mind can be realized by many different systems is called "functionalism." Mental properties are defined, not by the firing of specific brain processes as in identity theory, but rather by their functions. And a function is an abstract concept. The mental event "pain," for example, would be a functional abstract concept like "tissue damage warns the organism and causes retreat." This idea of pain is abstract, like a computer algorithm. It can then be realized by many different systems, like human brains, squirrel brains, alien brains, and possibly future AI systems.
Think of functionalism as defining mental states in the same way we might define a "knife." A knife does not have to be made out of anything in particular, as long as it fulfills its function of being able to cut things. So the abstract concept of a knife (a tool that can cut things) can be realized by many different materials and structures.
So a mental state, like pain, is defined as an abstract concept (tissue damage causing wincing, screaming, etc) that can be realized by many different systems.
V. Objections to Functionalism
The problem with functionalism is the opposite of the problem with identity theory. While identity theory fails to attribute mental states to those that (probably) have them (like aliens, AI, etc.), functionalism attributes mental states too freely. This is called the "liberalism" objection, because functionalism is too liberal in ascribing minds.
Since functionalism says that a mental state is defined by its function, all that is required for a system to be mental is that it realize the right function. A mental state is defined as an algorithm (tissue-damage input causes screaming output) that can be realized by a physical system. However, imagine that the water molecules in a pond temporarily match that algorithm, due to pure coincidence. Functionalism would have to say that the pond is feeling pain. Or think of a robot that is programmed with these mental algorithms but is empty inside, with no conscious mental states at all; it just goes through the motions. Again, functionalism says that this robot MUST have mental states, because mental states just are the physical realization of algorithms. But if it is even possible to have those functions without being mental, then functionalism is false.