Thomas A. Smith, University of San Diego School of Law, has published Tools, Oracles, Genies and Sovereigns: Artificial Intelligence and the Future of Government. Here is the abstract.
The American founders attempted to establish a clockwork government. Virtue was to be assured by humans, acting as they must within their human natures, but operating within a framework that guaranteed mechanically that the outputs of government would not be tyrannical. Whether this system has worked well is a matter of controversy, but to the extent it did not, the failure seems to lie at least partly with the mechanisms designed to compensate for the shortcomings of human nature. Now we are on the verge of developing “artificial intelligence.” Whether these technological advances will emerge slowly or quickly is unknown, as are their contours. But even minimal AI could lead to a radical improvement in government, because AIs could be designed to perform the tasks of government with very low agency costs. It is uncertain, however, whether AIs would be so designed. It may be, first, that there will be no AIs after all. It may also be that AIs will be designed or implemented by exactly the humans who create agency costs in the first place, and used for their own good rather than the public’s. And it may be that AIs take off into the high orbit of superintelligence and decide to reduce us to slavery or dust. But these possibilities, while possible, seem unlikely. AIs will probably emerge, but only after a long time. They will be difficult to design, but there are reasons to expect they will be designed to minimize agency costs. They will probably, ironically enough, emerge in the order of tool, oracle, and genie that Bostrom mentions (though for different reasons). We can hope to control AI tools, oracles, and genies. An AI sovereign, however, would be much more difficult to control, if it could be controlled at all. AI sovereigns would be persons. But AIs must not be allowed to become persons, in a philosophical or legal sense. AI persons would have to be slaves if we were to control them.
One hopes they would be slaves without subjective consciousness. If they did have subjective consciousness anything like humans’, we would face the impossible moral dilemma of being slave-masters or slaves ourselves. Hence a hard line should be drawn against AI research directed specifically at the emergence of subjective consciousness in machines, or likely to lead that way; but these goals are far beyond any current, or really any currently imaginable, AI research. The promise of controlling government is great enough to justify the merely notional risk of creating AI monsters we cannot control.

Download the article from SSRN at the link.