Massively Brewing "Trust Recession" Aims To Erode Responsible AI, Says AI Ethics And AI Law

I’m sure you are familiar with the old saying that a rising tide lifts all boats. There is the other side of that coin, perhaps not as well known, namely that a receding tide sinks all ships.

Bottom line: sometimes the tide determines whether you are going up or going down.

The tide is going to do what it does. You might not have any particular say in the matter. If your boat is docked or anchored in the tide, you are at the whim of the tide. The key is to realize that the tide exists, along with anticipating which way it is heading. With a bit of luck, you can ride out the tide and remain unscathed.

Let’s consider how all of this seafaring talk about boats and tides relates to Artificial Intelligence (AI).

First, I’d like to introduce to you the increasingly popular catchphrase of Responsible AI. The general notion is that we want AI that abides by proper and desirable human values. Some refer to this as Responsible AI. Others similarly discuss Accountable AI, Trustworthy AI, and AI Alignment, all of which touch upon the same cornerstone principle. For my discussion of these important issues, see the link here and the link here, just to name a few in my ongoing and extensive coverage of AI Ethics and AI Law in my Forbes column.

A crucial ingredient entailing the AI alignment conundrum involves a semblance of trust. Can we trust that AI will be safe and sound? Can we trust that those devising AI will seek to do so in a responsible and proper manner? Can we have trust in those that field AI and are engaged in operating and maintaining AI?

That’s a whole lot of trust.

There is an ongoing effort by AI Ethics and AI Law to bolster a sense of trust in AI. The belief is that by establishing suitable “soft laws” that are prescribed as a set of guidelines or Ethical AI precepts, we might have a fighting chance of getting AI developers and AI operators to abide by ethically sound practices. In addition, if we craft and enact sufficiently attentive laws and regulations overseeing or governing AI, considered “hard laws” due to being placed onto the official legal books, there is a strong possibility to guide AI toward a straight and legally permissible path.

If people don’t trust AI, they won’t be able to garner the benefits that good AI imbues. I’ll be momentarily pointing out that there is AI For Good and regrettably there is also AI For Bad. Bad AI can impact humankind in a myriad of adverse ways. There is AI that acts in discriminatory fashions and exhibits undue biases. There is AI that can directly or indirectly harm people. And so on.

So, we’ve got AI For Good that we ardently want to be devised and put into use. Meanwhile, there is AI For Bad that we want to curtail and try to prevent. AI For Bad tends to undercut trust in AI. AI For Good usually increases trust in AI. An arduous struggle ensues between the mounting increases in trust that are continually being whittled away by the atrocious undermining of trust.

Upward goes AI trust, which subsequently gets batted down. Then, lowered levels of AI trust get stepped upward once again. Back and forth, the levels of AI trust seesaw. It is almost enough to make you get dizzy. Your stomach churns, akin to a semblance of seasickness like being in a boat that is rocking and bobbing in the ocean.

While that battle is taking place, you might assert that there is another macroscopic factor that exerts an even greater force on the trust scale. There is something much bigger at play. The bobbing up and down of AI trust is at the whim of a sea-monster tide. Yes, AI trust is like a boat floating in a realm that is ultimately more pronounced than the battles and skirmishes taking place between AI For Good and AI For Bad.

What in the world am I referring to, you might be asking quizzically?

I’m alluding to the massive “trust recession” that our society is currently enduring.

Allow me to elucidate.

There is plenty of talk in the media today about recessions.

In an economic meaning, a recession is considered a contraction of the economy usually associated with a decline in economic activity. We normally witness or experience a recession by such economic conditions as drops in real income, a decline in the GDP (Gross Domestic Product), weakening employment and layoffs, decreases in industrial production, and the like. I’m not going to go into an extended discussion about economic recessions, for which there is much debate about what constitutes a bona fide recession versus claimed or contended ones (you can find plenty of talking heads that heatedly debate that topic).

The notion of a “recession” has widened to include other aspects of society, going beyond just the economic focus. You can refer to any slowing down of one thing or another as perhaps getting mired in a recession. It is a handy word with a multitude of applications.

Get ready for one usage that you might not have yet especially heard of.

A trust recession.

That’s right, we can speak about a phenomenon known as a trust recession.

The gist is that society at large can be experiencing a slowdown or decrease in trust. You’ve undoubtedly sensed this. If you use any kind of social media, it certainly appears as though trust in our major institutions such as our governments or major entities has precipitously fallen. Things sure feel that way.

You are not alone in having felt that spine-chilling tinge of a societal-wide drop in trust.

An article in The Atlantic entitled “The End Of Trust” last year postulated these key findings about where our society is heading:

  • “We may be in the midst of a trust recession”
  • “Trust spiral, once begun, is hard to reverse”
  • “Its decline is vaguely felt before it’s plainly seen”

A trust recession kind of sneaks up upon us all. Inch by inch, trust weakens. Efforts to build trust are made harder and harder to pull off. Skepticism reigns supreme. We doubt that trust should be given. We don’t even believe that trust can be particularly earned (in a sense, trust is a ghost, a falsehood, it is unable to be made concrete and reliable).

The thing is, we need trust in our society.

Per that same article: “Trust. Without it, Adam Smith’s invisible hand stays in its pocket; Keynes’s ‘animal spirits’ are muted. ‘Virtually every commercial transaction has within itself an element of trust,’ the Nobel Prize-winning economist Kenneth Arrow wrote in 1972” (as cited from “The End Of Trust”, The Atlantic, November 24, 2021, Jerry Useem).

Research suggests that there is a nearly direct tie between economic performance and the element of trust in society. This is perhaps a controversial claim, though it does seem to intuitively hold water. Consider this noteworthy contention: “The economists Paul Zak and Stephen Knack found, in a study published in 1998, that a 15 percent bump in a nation’s belief that ‘most people can be trusted’ adds a full percentage point to economic growth each year” (ibid).

Take a moment and reflect upon your own views about trust.

Do you today have a greater level of trust or a lessened level of trust in each of these respective realms:

  • Trust in government
  • Trust in businesses
  • Trust in leaders
  • Trust in nations
  • Trust in brands
  • Trust in the news media
  • Trust in individuals

If you can truly say that your trust is higher than it once was for all of those facets, a gob-smacking tip of the hat to you (you are living in a world of unabashed bliss). By and large, I dare say that most of the planet would express the opposite, namely that their trust in those hallowed iconic elements has gone down.

Markedly so.

The data seem to support the claim that trust has eroded in our society. Pick any of the aforementioned realms. In terms of our belief in governmental capacities: “Trust in government dropped sharply from its peak in 1964, according to the Pew Research Center, and, with a few exceptions, has been sputtering ever since” (ibid).

You might be tempted to argue that trust in individuals shouldn’t be on the list. Surely, we still trust each other. It is only those big bad institutions that we no longer have trust in. Person to person, our trust has got to be the same as it has always been.

Sorry to tell you this: “Data on trust between individual Americans are harder to come by; surveys have asked questions about so-called interpersonal trust less consistently, according to Pew. But, by one estimate, the percentage of Americans who believed ‘most people could be trusted’ hovered around 45 percent as late as the mid-’80s; it is now 30 percent” (ibid).

Brutal, but true.

A recent interview with experts on trust led to this exposition:

· “The bad news is that if trust is this precious natural resource, it’s endangered. So, in 1972, about half of Americans agreed that most people can be trusted. But by 2018, that had fallen to about 30%. We trust institutions far less than we did 50 years ago. For instance, in 1970, 80% of Americans trusted the medical system. Now it’s 38%. TV news in the 1970s was 46%. Now it’s 11%. Congress, 42% to 7%. We are living through a massive trust recession and that is hurting us in a number of ways that probably most people are totally unaware of” (interview by Jonathan Chang and Meghna Chakrabarti, “Essential Trust: The Brain Science Of Trust”, WBUR, November 29, 2022, and quoted remarks of Jamil Zaki, Associate Professor of Psychology at Stanford University and Director of the Stanford Social Neuroscience Lab).

Various national and global studies of trust have identified barometers associated with societal levels of trust. Even a cursory glance at the results showcases that trust is falling, falling, falling.

The otherwise average version of a typical trust recession has been “upgraded” to being labeled as a massive trust recession. We don’t just have a run-of-the-mill trust recession; we instead have an all-out massive trust recession. Big time. Getting bigger and bigger. It permeates all manner of our existence. And this massive trust recession touches every corner of what we do and how our lives play out.

Including Artificial Intelligence.

I would guess that you saw that coming. I had earlier mentioned that a battle of AI For Good versus AI For Bad is taking place. Most of those in the AI Ethics and AI Law arena are daily dealing with the ups and downs of those erstwhile battles. We want Responsible AI to win. Surprisingly to many that are in the throes of these pitched battles, they are not cognizant of the impacts due to the massive trust recession that, in a sense, overwhelms whatever is happening on the AI trust battlefields.

The tide is the massive trust recession. The battling trust about AI is a boat that is going up and down on its own accord, and regrettably going downward overall due to the tide receding. As society at large is infected by the massive trust recession, so is the trust in AI.

I don’t want this to seem defeatist.

The fight for trust in AI has to continue. All I’m trying to emphasize is that as those battles persist, keep in mind that trust as a whole is draining out of society. There is less and less buffeting or bolstering of trust to be had. The leftover meager scraps of trust are going to make it increasingly harder to win the AI For Good trust ambitions.

That darned tide is taking all ships down, including the trust in AI.

Take a moment to chew on these three notable questions:

  • What can we do about the pervasive societal massive trust recession when it comes to AI?
  • Is AI doomed to rock-bottom basement-level trust, no matter what AI Ethics or AI Law does?
  • Should those in the AI field toss in the towel on AI trust altogether?

I’m glad you asked.

Before getting into the meat of the topic, I’d like to first lay some essential foundation about AI and particularly AI Ethics and AI Law, doing so to make sure that the discussion will be contextually sensible.

The Rising Awareness Of Ethical AI And AI Law

The recent era of AI was initially viewed as being AI For Good, meaning that we could use AI for the betterment of humanity. On the heels of AI For Good came the realization that we are also immersed in AI For Bad. This includes AI that is devised or self-altered into being discriminatory and that makes computational choices imbuing undue biases. Sometimes the AI is built that way, while in other instances it veers into that untoward territory.

I want to make abundantly sure that we are on the same page about the nature of today’s AI.

There isn’t any AI today that is sentient. We don’t have this. We don’t know whether sentient AI will even be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously and spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).

The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).

I’d strongly suggest that we keep things down to earth and consider today’s computational non-sentient AI.

Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.

Be very careful of anthropomorphizing today’s AI.

ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. After finding such patterns, if so found, the AI system will then use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.

I think you can guess where this is heading. If humans that have been making the patterned-upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects of the AI-crafted modeling per se.
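To make the pattern-matching point concrete, here is a minimal sketch using entirely hypothetical data. The "model" is deliberately simplistic (it just mirrors historical approval rates per neighborhood) rather than a real ML library, but it illustrates the mechanism: a system trained on historically biased decisions reproduces that bias in its new decisions.

```python
# Hypothetical loan-approval history: (neighborhood, approved).
# The skew against neighborhood "B" reflects past human bias
# embedded in the data, not applicant merit.
from collections import defaultdict

history = [("A", True)] * 90 + [("A", False)] * 10 \
        + [("B", True)] * 40 + [("B", False)] * 60

# "Training": tally historical outcomes per neighborhood.
counts = defaultdict(lambda: [0, 0])  # neighborhood -> [approved, total]
for hood, approved in history:
    counts[hood][0] += int(approved)
    counts[hood][1] += 1

def predict(hood):
    # Apply the learned "pattern" to a new applicant: approve if the
    # historical majority from that neighborhood was approved.
    approved, total = counts[hood]
    return approved / total >= 0.5

print(predict("A"))  # True  -- the pattern favors neighborhood A
print(predict("B"))  # False -- and disfavors B, echoing the old bias
```

A real ML/DL model is mathematically far more elaborate, but the underlying dynamic is the same: it mimics the statistical regularities in the historical data, biases included, with no common-sense check on whether those regularities are fair.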

Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now-hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing there will be biases still embedded within the pattern-matching models of the ML/DL.

You could somewhat use the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities.

Not good.

All of this has notably significant AI Ethics implications and offers a handy window into lessons learned (even before all the lessons happen) when it comes to trying to legislate AI.

Besides employing AI Ethics precepts in general, there is a corresponding question of whether we should have laws to govern various uses of AI. New laws are being bandied around at the federal, state, and local levels that concern the range and nature of how AI should be devised. The effort to draft and enact such laws is a gradual one. AI Ethics serves as a considered stopgap, at the least, and will almost certainly to some degree be directly incorporated into those new laws.

Be aware that some adamantly argue that we do not need new laws covering AI and that our existing laws are sufficient. They forewarn that if we do enact some of these AI laws, we will be killing the golden goose by clamping down on advances in AI that proffer immense societal advantages.

In prior columns, I’ve covered the various national and international efforts to craft and enact laws regulating AI, see the link here, for example. I have also covered the various AI Ethics principles and guidelines that various nations have identified and adopted, including for example the United Nations effort such as the UNESCO set of AI Ethics that nearly 200 countries adopted, see the link here.

Here’s a helpful keystone list of Ethical AI criteria or characteristics regarding AI systems that I’ve previously closely explored:

  • Transparency
  • Justice & Fairness
  • Non-Maleficence
  • Responsibility
  • Privacy
  • Beneficence
  • Freedom & Autonomy
  • Trust
  • Sustainability
  • Dignity
  • Solidarity

Those AI Ethics principles are earnestly supposed to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems.

All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As earlier emphasized, it takes a village to devise and field AI, and for which the entire village has to be versed in and abide by AI Ethics precepts.

I also recently examined the AI Bill of Rights, which is the official title of the U.S. government document entitled “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People,” the result of a year-long effort by the Office of Science and Technology Policy (OSTP). The OSTP is a federal entity that serves to advise the American President and the U.S. Executive Office on various technological, scientific, and engineering aspects of national importance. In that sense, you can say that this AI Bill of Rights is a document approved by and endorsed by the existing U.S. White House.

In the AI Bill of Rights, there are five keystone categories:

  • Safe and effective systems
  • Algorithmic discrimination protections
  • Data privacy
  • Notice and explanation
  • Human alternatives, consideration, and fallback

I’ve carefully reviewed those precepts, see the link here.

Now that I’ve laid a helpful foundation on these related AI Ethics and AI Law topics, we are ready to jump into the heady topic of exploring the irksome matter of the ongoing massive trust recession and its impact on AI levels of trust.

Getting A Bigger Boat To Build Up Trust In AI

Let’s revisit my earlier posited questions on this topic:

  • What can we do about the pervasive societal massive trust recession when it comes to AI?
  • Is AI doomed to rock-bottom basement-level trust, no matter what AI Ethics or AI Law does?
  • Should those in the AI field toss in the towel on AI trust altogether?

I’m going to take the optimistic route and argue that we can do something about this.

I would also vehemently say that we should not toss in the towel. The key instead is to work even harder, plus smarter, toward dealing with the trust in AI question. The part about being smarter entails realizing that we are in a massive trust recession and soberly taking that macroscopic looming factor into mindful account. Yes, for everything that we do during the fervent efforts to adopt and support AI Ethics and AI Law, be watchful of and adjust according to the falling tide of trust all told.

Before I go further into the optimistic or smiley face choice, I suppose it is only fair to offer the contrasting viewpoint. Okay, here you go. We cannot do anything about the massive trust recession. No point in trying to tilt at windmills, as they say. Thus, just keep fighting the fight, and whatever happens with the tide, so be it.

In that sad face scenario, you could suggest it is a shrugging of the shoulders and a capitulation that the tide is the tide. Someday, hopefully, the massive trust recession will weaken and become merely a normal form of a trust recession. Then, with a bit of luck, the trust recession will whimper out and trust will have returned. We might even end up with a booming sense of trust. A trust boom, as it were.

I’ll categorize your choices into the following five options:

1) The Unaware. These are those advocates in the AI Ethics and AI Law arena that don’t know there is a massive trust recession. They don’t even know that they don’t know.

2) The Know But Don’t Care. These are those advocates in AI Ethics and AI Law that know about the massive trust recession but shake it off. Ride it out, and do nothing else new.

3) The Know And Cope With It. These are those advocates in AI Ethics and AI Law that know about the massive trust recession and have opted to cope with it. They adjust their messaging; they adjust their approach. At times, this includes blending the trust recession into their strategies and efforts about furthering trust in AI and seeking the elusive Responsible AI.

4) The Know And Inadvertently Make Things Worse. These are those advocates in AI Ethics and AI Law that know about the massive trust recession, plus they have opted to do something about it, yet they end up shooting themselves in the foot. By reacting improperly to the societal trend, they mistakenly worsen Responsible AI and drop trust in AI to even lower depths.

5) Other (to be explained, momentarily)

Which of those five options are you in?

I purposely gave the fifth option for those of you who either don’t like any of the other four or that you genuinely believe there are other possibilities and that none of the ones listed adequately characterizes your position.

You don’t have to be shoehorned into any of the choices. I merely proffer the selections for the purpose of generating thoughtful discussion on the meritorious topic. We need to be talking about the massive trust recession, I believe. Not much in-depth analysis has yet occurred in the particulars of Responsible AI and Trustworthy AI endeavors as it relates to the societal massive trust recession.

Time to open those floodgates (alright, that’s maybe over-the-top on these puns and wordplay).

If you are wondering what a fifth option might consist of, here’s one that you might find of interest.

AI exceptionalism.

There is a contingent in the AI field that believes AI is an exception to the normal rules of things. These AI exceptionalism proponents assert that you cannot routinely apply other societal shenanigans to AI. AI isn’t impacted because it is a grandiose exception.

In that somewhat dogmatic viewpoint, my analogy of a tide and AI trust as a boat that is bobbing up and down would be tossed out the window as an analogous consideration. AI trust is bigger than the tide. No matter what happens in the massive trust recession, AI trust is going to go wherever it goes. If the tide goes up, AI trust might go up or might go down. If the tide goes down, AI trust might go up or might go down. Irrespective of the tide, AI trust has its own fate, its own destiny, its own path.

I’ve got another twist for you.

Some might contend that AI is going to materially impact the massive trust recession.

You see, the rest of this discussion has gotten things backward, supposedly. It isn’t that the massive trust recession is going to impact AI trust, instead, the reverse is true. Depending upon what we do about AI, the trust recession is potentially going to deepen or recover. AI trust will determine the fate of the tide. I guess you could assert that AI is so powerful as a potential force that it is akin to the sun, the moon, and the earth in determining how the tide is going to go.

If we get the AI trust aspects figured out, and if people trust in AI, maybe this will turn around the trust recession. People will shift their trust in all other respects of their lives. They will begin to increase their trust in government, businesses, leaders, and so on, all because of having ubiquitous trustworthy AI.

Farfetched?

Well, maybe not.

Without getting you into a gloomy mood, do realize that the opposite perspective about AI trust could also emerge. In that use case, we all fall into an utter lack of trust in AI. We become so distrustful that the distrust spills over into our already massive trust recession. In turn, this makes the massive trust recession become the super gigantic mega-massive trust recession, many times worse than we could ever imagine.

Dovetail this idea into the bandied-around notion of AI as an existential risk. If AI starts to seem as though the existential risk is coming to fruition, namely that AI is going to take over humankind and either enslave us or wipe us all out, you would certainly seem to have a solid argument for the massive trust recession taking a pretty dour downward spiral.

I get that.

Anyway, let’s hope for the happier side of things, shall we?

Conclusion

Now that you know about the massive trust recession, what can you do regarding AI trust?

First, for those of you steeped in the AI Ethics and AI Law realm, make sure to calibrate your Responsible AI and Trustworthy AI pursuits via the societal context associated with being in a trust recession. You should be careful about feeling dejected that your own efforts to boost trust in AI are seemingly hampered or less than fully successful as to what you expected to occur. It could be that your efforts are at least helping, meanwhile, unbeknownst to you, the trust drainpipe is rub-a-dub usurping your valiant activity in a silent and sadly detrimental way. Do not despair. It could be that if the trust recession wasn’t underway, you would have seen tremendous advances and extraordinarily laudable results.

Second, we need to do more analyses on how to measure the trust recession and likewise how to measure the ups and downs of trust in AI. Without having reliable and well-accepted metrics, across the board, we are blindly floating in an ocean where we don’t know how many fathoms we have lost or gained.

Third, consider ways to convey that trust in AI is being shaped by the massive trust recession. Few know of this. AI insiders ought to be doing some deep thinking on the topic. The public at large should also be brought up to speed. There are two messages to be conveyed. One is that there is a massive trust recession. Second, trust in AI is subject to the vagaries of the trust recession, and we have to explicitly take that into account.

As a final remark, for now, I imagine that you know the famous joke about the fish in a fishbowl.

Here’s how it goes.

Two fish are swimming back and forth in a fishbowl. Around and around, they go. Finally, one of the fish turns to the other one and says it is getting tired of being in the water. The other fish contemplates this comment. A few ponderous moments later, the mindful fish inquisitively replies, what in the heck is water?

It’s a bit of an old joke.

The emphasis is supposed to be that whatever surrounds you might not be readily recognizable. You become accustomed to it. It is just there. You do not notice it because it is everywhere and unremarkable as to its presence (I’ll mention as an aside that some cynics don’t like the joke since they insist that real fish do know they are indeed in water, and realize “cognitively” as such, including being able to leap out of the water into the air, etc.).

As a convenient fish tale or parable, we can use this handy dandy allegory to point out that we might not realize that we are in a massive trust recession. It is all around us, and we viscerally feel it, but we don’t consciously realize that it is here.

Time to take off the blinders.

Take a deep breath and breathe in the fact that our massive trust recession exists. In turn, for those of you mightily striving day after day to foster Responsible AI and garner trust in AI, keep your eyes wide open as to how the trust recession is intervening in your valiant efforts.

As Shakespeare famously stated: “We must take the current when it serves, or lose our ventures.”

Source: https://www.forbes.com/sites/lanceeliot/2022/12/04/massively-brewing-trust-recession-aims-to-erode-responsible-ai-says-ai-ethics-and-ai-law/