The perceived trustworthiness of a robot rests on the belief that it will do good, but that belief will always run into problems. Even if you could prove a robot will never make a bad decision (something you can't do with a human), "good" and "bad" aren't black and white: a bad decision can be made from good motives. We can empathise with people who make such mistakes, and ultimately forgive them - but it will be a long time before we extend that kind of empathy to a robot.