I am not completely certain that I am using the correct terminology in the title so I will define how I am using the terms.
In this thread, an AI robot and an AI life form can be alike in capabilities: both can be autonomous, both can learn, and both may even resemble humans or other animals where that is ethical.
The difference is that the AI robot is not given a sense of self, in terms of value, in comparison to existing life forms. In its programming, it never considers whether an action is moral. The machine could run programs so complex that it could be considered an android, and it could even download new functionality in real time. However, the morality of those programs is left to their designers.
An AI life form, on the other hand, is the result of an effort to create an AI machine that has a morality system. It is able to evaluate and compare the value of itself against that of other life forms. The machine may also be able to modify its own value system through learning. In its highest form, it might exercise the same degree of self-determination as a human. It might decide to travel to Jupiter, for example.
Do you think that both of these types of AI machines are ethical to create?
Personally, I would limit myself to AI robots.