{"id":2436,"date":"2024-06-10T15:34:33","date_gmt":"2024-06-10T15:34:33","guid":{"rendered":"https:\/\/cyfryzacja.org\/?p=2436"},"modified":"2024-06-19T07:10:51","modified_gmt":"2024-06-19T07:10:51","slug":"sztuczna-inteligencja-i-wartosci-wladza-czy-dobro-wspolne","status":"publish","type":"post","link":"https:\/\/cyfryzacja.org\/en\/wiadomosci\/sztuczna-inteligencja-i-wartosci-wladza-czy-dobro-wspolne\/","title":{"rendered":"Artificial Intelligence and Values: Power or the Common Good?"},"content":{"rendered":"<p class=\"translation-block\">Since the application of existing and emerging artificial intelligence technologies entails a qualitative leap in human societies, these technologies raise the pressing question of how to conceptualise them in terms of values and the human conduct that follows. It seems reasonable to claim that different values applied to the regulatory frameworks, policies and culture surrounding the use of artificial intelligence will lead to vastly different societal outcomes. First, it will be explained how values are to be conceptualised, both regarding their incommensurability and in their connection to facts. Then the danger of artificial intelligence being used in the service of power will be emphasised. Finally, and importantly, an alternative path will be sketched and understood in the context of the common good and basic values.\nBecause values are incommensurable and do not operate directly in the world of facts, but only influence social and physical reality through the conduct of individuals and groups, it matters greatly what these regulative ideals are. There is no standard measure or practical mathematical procedure for comparing values in the abstract. In human conduct, a value is chosen (consciously or unconsciously) and acted upon. 
The societal consequences then emerge; they are good or bad and can be subject to further valuation.\nIf good values are not consciously chosen by individuals, or if there is not a sufficient degree of congruence between the values of different people and their groups, a conflict of values ensues. Such a conflict does not necessarily, but tends to, collapse into the application of mere power. In the use of artificial intelligence technologies, this means that they become either a tool in the power struggle against an alternative set of values or a tool in the service of power itself. In extreme cases, this means using artificial intelligence to control populations, applying it to non-defensive warfare, and subverting democratic processes through nudging and media influence. \nThere is, however, a different way. The ethical use of artificial intelligence should not consist only in setting safeguards, which may simultaneously stifle technological development, but should be secured at the level of culture through a conception of the common good. Good outcomes at the level of society are always predicated on the moral conduct of a large enough number of people in that society. One way to think about this is in Finnisian terms, where respect for the basic values of life, knowledge, play, religion, friendship, practical reasonableness and aesthetic experience is the central regulative ideal. The core meaning of moral conduct is respect for every basic value in every act. The further we depart from this centre, the less ethical the use of artificial intelligence becomes.\nIn utilising and practically applying artificial intelligence technologies, it is necessary to avoid being guided by bad values and to avoid letting their use collapse into the societal application of mere power, which likewise leads to adverse outcomes. In ensuring ethical use, people's moral conduct is of the utmost importance. 
The regulative ideal of the common good and the need to respect basic values in using artificial intelligence indeed go hand in hand.<\/p>\n\n\n\n<figure class=\"wp-block-image size-thumbnail\"><img loading=\"lazy\" decoding=\"async\" width=\"150\" height=\"150\" src=\"https:\/\/cyfryzacja.org\/wp-content\/uploads\/2024\/05\/ameu_portreti_2023_03_10_MAR_9515_pigac_si-150x150.jpg\" alt=\"\" class=\"wp-image-2342\" srcset=\"https:\/\/cyfryzacja.org\/wp-content\/uploads\/2024\/05\/ameu_portreti_2023_03_10_MAR_9515_pigac_si-150x150.jpg 150w, https:\/\/cyfryzacja.org\/wp-content\/uploads\/2024\/05\/ameu_portreti_2023_03_10_MAR_9515_pigac_si-11x12.jpg 11w\" sizes=\"auto, (max-width: 150px) 100vw, 150px\" \/><\/figure>\n\n\n\n<p class=\"has-small-font-size\"><em>Luka Martin Toma\u017ei\u010d has a double habilitation in law and philosophy from Alma Mater Europaea University, Slovenia. He has authored more than a hundred publications in energy law, legal philosophy, and constitutional law. In 2023, IusInfo named him among the ten most influential Slovenian lawyers.<\/em><\/p>","protected":false},"excerpt":{"rendered":"<p>Since the application of existing and emerging artificial intelligence technologies entails a new qualitative leap in human societies, they raise the serious question of conceptualising them in relation to values and the human conduct that follows from them. 
It seems reasonable to claim that the different values applied in the regulatory frameworks, policies and culture surrounding the use of artificial intelligence will lead to vastly [&hellip;]<\/p>\n","protected":false},"author":4,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[143,114],"tags":[110,144],"class_list":["post-2436","post","type-post","status-publish","format-standard","hentry","category-ai","category-doc-dr-luka-martin-tomazic","tag-ai","tag-aksjologia"],"_links":{"self":[{"href":"https:\/\/cyfryzacja.org\/en\/wp-json\/wp\/v2\/posts\/2436","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/cyfryzacja.org\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cyfryzacja.org\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/cyfryzacja.org\/en\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/cyfryzacja.org\/en\/wp-json\/wp\/v2\/comments?post=2436"}],"version-history":[{"count":2,"href":"https:\/\/cyfryzacja.org\/en\/wp-json\/wp\/v2\/posts\/2436\/revisions"}],"predecessor-version":[{"id":2460,"href":"https:\/\/cyfryzacja.org\/en\/wp-json\/wp\/v2\/posts\/2436\/revisions\/2460"}],"wp:attachment":[{"href":"https:\/\/cyfryzacja.org\/en\/wp-json\/wp\/v2\/media?parent=2436"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cyfryzacja.org\/en\/wp-json\/wp\/v2\/categories?post=2436"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cyfryzacja.org\/en\/wp-json\/wp\/v2\/tags?post=2436"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}