The perceived trustworthiness of a robot rests on a belief that it will do good, but that belief will always run into problems. Even if you could prove a robot will never make a bad decision (something you can't do with a human), "good" and "bad" are not black and white; a bad decision can be made from good motives. We can empathise with people, and ultimately forgive them, but it will be a long time before we extend that kind of empathy to a robot.
Once we move to truly autonomous systems, software will play a much bigger part. We will no longer need a human to decide when to change the route of an aircraft or when to turn off the motorway onto a side road. It is at this point that many of us start to worry: if a machine can truly make its own decisions, how do we know it is safe?