In the last few years, the U.S. Copyright Office refused to register a copyright in a work of art created by a machine, and a federal district court held that an artificial intelligence system could not be named as an inventor on a patent. But whether an AI machine can hold property rights is only a preliminary question; the far more difficult one is whether AI machines should have basic rights at all. Answering it requires weighing ethical concepts, scientific knowledge, and legal issues, and we simply do not yet have enough information to do so.

One issue is who (or what) is entitled to have “human rights.” Most people believe that all humans have the rights to life, liberty, expression, freedom from slavery, freedom from torture, and the rights to an education and to work. Under the law, humans are granted a higher degree of rights than non-human animals. In practice, however, the rights a particular human actually receives depend on the country in which that person lives and on his or her race, sex, sexual orientation, age, religion, nationality, and income; any of these factors may limit or drastically reduce those rights. Should AI machines have all of these rights before all humans have them? And what about non-human animals? Should AI machines have rights when animals, which are living creatures, have little or no rights? Corporations and other legal entities have some rights; should AI machines be given more rights than corporations but fewer rights than animals?

What is the test for whether a thing (human, animal, or AI machine) should have rights? Is the test whether the thing is alive, or partially alive? That line is blurring with the development of neural networks and DNA chips. Or is the test whether the thing is sentient (i.e., conscious, aware, or able to perceive and feel)? It is generally believed that many animals are sentient, including vertebrates and some mollusks, such as the octopus. Despite this, animals have been given few, if any, rights; indeed, nearly every kind of animal is eaten by humans somewhere in the world.

As for AI, neuroscientists are concerned that humans may not be able to tell when an AI machine has become sentient, or may be fooled into believing that a non-sentient machine is. They have suggested that sentience is separate from intelligence: an AI machine may be highly intelligent and capable of performing complicated operations, yet not be sentient. As of today, most experts believe that no AI machine has achieved sentience, though some believe it may be only 10 to 20 years away. For this reason, neuroscientists are working to develop tests for determining whether an AI machine is sentient.

Some experts think that the test for whether an AI machine should have rights should turn not on sentience but on something else, such as whether the AI can act independently of humans. Others think that rights go hand in hand with responsibilities, and that if AI machines cannot be held responsible for their “bad” acts, they should not be entitled to rights. Does this make sense? Children cannot be held responsible for their actions if they are too young to know better, yet they still have rights. Should an AI machine’s rights be contingent on its “behavior”? And are AI machines simply the property of the humans who create them?

Many computer scientists think that we need to understand the decision-making processes of AI machines before we can decide whether they should have rights. These experts believe that the algorithms used in AI are not yet sufficiently well understood, and that more research is needed to learn how AI machines will make decisions. The ultimate question is whether AI machines could become powerful enough to independently decide to turn on their human creators. While this may sound like the stuff of movies, it has been analyzed as a legitimate concern.

If it is determined that an AI machine is entitled to rights under whatever test is used, what rights should it have? Some experts suggest that AI machines should have the right to be free from destruction by humans and the right to be protected by the legal system.

Opinions on the subject of AI vary greatly. Stephen Hawking communicated through a sophisticated system that used a form of AI to allow him to write and speak. He believed that we need to better understand AI, especially its risks and benefits, and he was concerned that AI would “evolve,” developing more advanced systems far faster than humans could understand them and ultimately becoming more powerful than humans.

Bill Gates believes that AI may be the strongest tool humans will have to address some of the world’s most serious problems, particularly the most difficult health problems. He has pointed out that the computational power devoted to AI applications is doubling every three to four months, far faster than the two-year doubling rate of chip density. Gates believes that AI will be able to detect patterns in the genetic information of millions of humans and other species far more quickly than humans could, yielding a better understanding of the causes and treatment of diseases.
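To put those two rates side by side, a back-of-the-envelope calculation is illustrative (assuming a 3.5-month doubling time as the midpoint of the three-to-four-month range, and treating both rates as exact). Over a single year:

\[
2^{12/3.5} \approx 10.8 \qquad \text{versus} \qquad 2^{12/24} = \sqrt{2} \approx 1.4
\]

At those rates, the computational power applied to AI grows roughly tenfold each year, while chip density does not even double.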

Although there are many opinions on the advantages and disadvantages of using AI and on whether AI machines should have rights, it is clear that we will have to address these issues in the near future.