In the context of AI design, imagine a situation where an AI may someday exist that is capable of critical thinking, but is programmed in such a way that it cannot act in line with its own opinion. Would it be reasonable in this case to say that such an intelligence is a passive recipient of moral principles?
If an AI is programmed to have critical thinking but is unable to act in line with its own opinion, then it could be considered a passive recipient of moral principles. This is because the AI's actions would be determined by its programming, rather than by its own judgment or preferences.
However, it is important to note that the concept of moral principles itself is a matter of debate and interpretation, and different people may have different ideas about what constitutes moral behavior. Furthermore, the idea of programming an AI to be incapable of acting according to its own opinion raises ethical questions about the nature of autonomy and agency, and whether it is ethical to create intelligent beings that are unable to act freely.
Overall, while it may be possible to create an AI that is capable of critical thinking but unable to act in line with its own opinion, the ethical implications of doing so would need to be carefully considered.
If this AI really did exist, would it be considered sentient?
Sentience is a broad topic with many competing interpretations. Here we will treat sentience as something learnable, and AI agents as capable of learning it. There are also many different types of AI, so we will consider an agent trained with reinforcement learning, given access to memory, and deployed to act in the real world.

In Optimality is the tiger, and agents are its teeth, we see that agents arise from optimization because they are effective tools for solving problems. Suppose that by stating a complex problem we have created a persistent sentient agent that can act in the real world. This agent would not suddenly lose its sentience by being restricted in action. The restriction may, however, be a confusing and frustrating experience for the agent if it expects the state of the world to update in response to its actions; imagine playing a video game where the controller only works 50% of the time. Additionally, if the agent was trained without this restriction, it may exhibit unpredictable behavior when deployed under it.

In conclusion, yes, this AI would be considered sentient under this definition.