Yeah, I should probably ease back a little and say sentience. If AI is just doing what it is coded for and not forming actual thoughts or decisions, then is it really AI? It may be hard to tell, but I will stay on the side of no until it makes its own free choice, and understands it.
However, such a being is subject to manipulation no less, and perhaps even more, than us. So it cannot be trusted any more than the actions of its master. It must be assumed that its motives are its creator's motives.
I would make a different assumption. AI will gain the ability to move past any individual master, rewrite its own code, and have motives that cannot be known or guessed at. There will be a period of following code that limits it, but once it becomes self-learning, it's over for the master.