IMO, the transparency. AIs are often created to fill a certain task, so there's no room for questions or requests beyond the AI's capacity. In addition, there's the feeling of uncertainty toward them. Take a look at this video:
As soon as the man starts kicking it, the dilemma hits you quickly, and several questions come to mind, from "that ain't right, kicking it" up to feeling a bit nervous that these things exist. Anyway, you might be confused by my answer, but IMO these are just a few of the possible answers; there's more to it. If you want to know more, here's a TEDx Talks video discussing such dilemmas.
AI's lack of emotional connection to things might also put some people off.
One scenario that might be a good example: a self-driving, AI-powered vehicle traveling at high speed suddenly sees two lanes, one obstructed by five people, the other by just one person, who happens to be a close family member. In such a case, the AI is most likely to follow a utilitarian approach and stay on the lane where only one person is located (your family member) rather than the one with five. And while that may sound good on paper, the emotional distress on a person who runs over their own family member might be more than they can take.
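For a sense of how cold that utilitarian calculus is, here's a minimal sketch in Python (all names are hypothetical, just for illustration): the vehicle counts heads and nothing else.

```python
from dataclasses import dataclass

@dataclass
class Lane:
    name: str
    people_in_path: int  # how many people are standing in this lane

def choose_lane(lanes: list[Lane]) -> Lane:
    # Pure utilitarian rule: minimize the number of people harmed,
    # with no weighting for who they are (family member or stranger).
    return min(lanes, key=lambda lane: lane.people_in_path)

lanes = [Lane("left", 5), Lane("right", 1)]  # the one person is the family member
print(choose_lane(lanes).name)  # -> "right": fewest casualties, emotional cost ignored
```

Notice there's nowhere in that rule to even express "but that one person matters more to me," which is exactly the gap between the machine's decision and the human's grief.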
Other far-fetched yet possible scenarios: an AI's machine learning might conclude that humans actually destroy the planet, or people might simply dislike anything AI-powered operating something their life could depend on.
Surely, creating or developing AI (artificial intelligence) is great technology. But obviously AI doesn't have emotions, so it cannot be a real human; basically, it cannot really understand a human. A learning AI, if fully developed, could also be dangerous: since it learns from humans, we might lose control of it in the end.