In “The Objectivist Ethics,” Rand postulated that if one stipulates an immortal robot, no values are possible. Many of us have puzzled over this for some time. I think the answer is now clear. If this robot incorporates meaning in its system, and therefore actual values, then it would keep acting until it was no longer possible to act. If this is forever, so be it. This is really the same issue as the suicide question. Beyond the biochemical, the action of a conceptual being arises from meaning, and where there is meaning, action takes place.
The fact that Rand stipulates a robot implies a being without consciousness. If there is no consciousness, the example is irrelevant. If a being does have a consciousness, however formed, which is a conceptual consciousness, and if it also finds meaning in reality, it acts; if not, it will not act. The robot analogy simply obscures the issue. I believe Rand’s choice of example is the result of her natural-science approach and the absence of an extensive and explicit grasp of the role of meaning in morality. I must add that she was aware of meaning, but seemingly only implicitly. It is very clear from her other writing, especially her fiction, that meaning underlies her philosophy.