The partially observed control problem is considered for stochastic processes in which the control enters both the diffusion coefficient and the observation. A maximum principle is proved for the partially observable optimal control. A purely probabilistic approach is used, and the adjoint processes are characterized as solutions of related backward stochastic differential equations in finite-dimensional spaces. Much of the derivation parallels that of the completely observable case.