Because the application of existing and emerging artificial intelligence technologies entails a qualitative leap for human societies, it raises the pressing question of how these technologies are to be conceptualised in relation to values and the human conduct that follows from them. It seems reasonable to claim that the different values applied to the regulatory frameworks, policies, and culture surrounding the use of artificial intelligence will lead to vastly different societal outcomes. First, it is elaborated how values are to be conceptualised with respect to their incommensurability and their connection to facts. Then the danger of artificial intelligence being used in the service of power is emphasised. Finally, yet importantly, an alternative path is sketched and understood in the context of the common good and basic values.

Because values are incommensurable and do not operate directly in the world of facts, but only by influencing social and physical realities through the conduct of individuals and groups, it matters greatly what these regulative ideals are. In this sense, there is no common measure or practical, mathematical way to compare values in the abstract. In human conduct, a value is chosen (consciously or unconsciously) and acted upon. Societal consequences then emerge, which are good or bad and can themselves be subject to further valuation as facts. If good values are not chosen consciously by individuals, or if there is not a high enough degree of congruence between the values of different people and their groups, a conflict of values ensues. Such a conflict does not necessarily, but does tend to, collapse into the application of mere power.

In the use of artificial intelligence technologies, this means that they become either a tool in the power struggle against an alternative set of values or a tool in the service of power itself. In extreme cases, this means using artificial intelligence to control populations, applying it to non-defensive warfare, and subverting democratic processes through nudging and media influence.

There is, however, a different way. The ethical use of artificial intelligence should not consist only in setting safeguards, which may simultaneously stifle technological development, but should be secured at the level of culture through a conception of the common good. Good outcomes at the level of society are always predicated on the moral conduct of a large enough number of people in that society. One way to think about this is in Finnisian terms, where respect for the basic values of life, knowledge, play, religion, friendship, practical reasonableness, and aesthetic experience is the central regulative ideal. The core meaning of moral conduct is respect for every basic value in every act. The further we depart from this centre, the less ethical the use of artificial intelligence becomes.

In utilising and practically applying artificial intelligence technologies, it is necessary to avoid being guided by bad values or letting their use collapse into the societal application of mere power, which likewise leads to adverse outcomes. In ensuring ethical use, people's moral conduct is of the utmost importance. The regulative ideal of the common good and the need to respect basic values in using artificial intelligence indeed go hand in hand.

Luka Martin Tomažič holds a double habilitation in law and philosophy from Alma Mater Europaea University, Slovenia. He has authored more than a hundred publications in energy law, legal philosophy, and constitutional law. In 2023, IusInfo named him among the ten most influential Slovenian lawyers.