AI Ethics and the shocking revelation that training AI to be toxic or biased might be beneficial, including for autonomous self-driving cars.

Here’s an old line that I’m sure you’ve heard before.

It takes one to know one.

You might not realize that this is an expression that can be traced to the early 1900s and was usually invoked when referring to wrongdoers (other variations of the catchphrase go back further such as to the 1600s). An example of how this utterance might be used entails the notion that if you wish to catch a thief then you need to use a thief to do so. This showcases the assertion that it takes one to know one. Many movies and TV shows have capitalized on this handy bit of sage wisdom, often portraying that the only viable means to nab a crook entailed hiring an equally corrupt crook to pursue the wrongdoer.

Shifting gears, some might leverage this same logic to argue that a suitable way to discern whether someone is embodying undue biases and discriminatory beliefs would be to find someone that already harbors such tendencies. Presumably, a person already filled with biases is going to be able to more readily sense that this other human is likewise filled to the brim with toxicity. Again, it takes one to know one is the avowed mantra.

Your initial reaction to the possibility of using a biased person to suss out another biased person might be one of skepticism and disbelief. Can’t we figure out whether someone holds untoward biases by merely examining them and not having to resort to finding someone else of a like nature? It would seem oddish to purposely seek to discover someone that is biased in order to uncover others that are also toxically biased.

I guess it partially depends on whether you are willing to accept the presumptive refrain that it takes one to know one. Note that this does not suggest that the only way to catch a thief requires that you exclusively and always make use of a thief. You could reasonably argue that this is merely an added path that can be given due consideration. Maybe sometimes you are willing to entertain the possibility of using a thief to catch a thief, while other circumstances might make this an unfathomable tactic.

Use the right tool for the right setting, as they say.

Now that I’ve laid out those fundamentals, we can proceed into the perhaps unnerving and ostensibly shocking part of this tale.

Are you ready?

The field of AI is actively pursuing the same precept that it sometimes takes one to know one, particularly in the case of trying to ferret out AI that is biased or acting in a discriminatory manner. Yes, the mind-bending idea is that we might purposely want to devise AI that is fully and unabashedly biased and discriminatory, doing so in order to use this as a means to discover and uncover other AI that has that same semblance of toxicity. As you’ll see in a moment, there are a variety of vexing AI Ethics issues underlying the matter. For my overall ongoing and extensive coverage of AI Ethics and Ethical AI, see the link here and the link here, just to name a few.

I guess you could express this use of toxic AI to go after other toxic AI as the proverbial fighting fire-with-fire conception (we can invoke plenty of euphemisms and illustrative metaphors to depict this situation). Or, as already emphasized, we might parsimoniously refer to the assertion that it takes one to know one.

The overarching concept is that rather than only trying to figure out whether a given AI system contains undue biases by using conventional methods, maybe we should seek to employ less conventional means too. One such unconventional means would be to devise AI that contains all the worst of biases and societally unacceptable toxicities and then use this AI to aid in routing out other AI that has those same propensities of badness.

When you give this a quick thought, it certainly appears to be perfectly sensible. We could aim to build AI that is toxic to the max. This toxic AI is then used to ferret out other AI that also has toxicity. For the then revealed “bad” AI, we can deal with it by undoing the toxicity, ditching the AI entirely (see my coverage of AI disgorgement or destruction at this link here), imprisoning the AI (see my coverage of AI confinement at this link here), or doing whatever else seems applicable.

A counterargument is that we ought to have our heads examined that we are intentionally and willingly devising AI that is toxic and filled with biases. This is the last thing we ought to ever consider, some would exhort. Focus on making AI consisting wholly of goodness. Do not focus on devising AI that has the evils and dregs of undue biases. The very notion of such a pursuit seems repulsive to some.

There are more qualms about this controversial quest.

Maybe a mission of devising toxic AI will merely embolden those that wish to craft AI that is able to undercut society. It is as though we are saying that crafting AI that has inappropriate and unsavory biases is perfectly fine. No worries, no hesitations. Seek to devise toxic AI to your heart’s content, we are loudly conveying to AI builders all across the globe. It is (wink-wink) all in the name of goodness.

Furthermore, suppose this toxic AI kind of catches on. It could be that the AI is used and reused by lots of other AI builders. Eventually, the toxic AI gets hidden within all manner of AI systems. An analogy might be made to devising a human-undermining virus that escapes from a presumably sealed lab. The next thing you know, the darned thing is everywhere and we have wiped ourselves out.

Wait for a second, the counter to those counterarguments goes, you are running amok with all kinds of crazy and unsupported suppositions. Take a deep breath. Calm yourself.

We can safely make AI that is toxic and keep it confined. We can use the toxic AI to find and aid in reducing the increasing prevalence of AI that unfortunately does have undue biases. All of these preposterously wild and unsubstantiated snowballing exclamations are purely knee-jerk reactions and regrettably foolish and outrightly foolhardy. Do not try to throw out the baby with the bathwater, you are forewarned.

Think of it this way, the proponents contend. The proper building and use of toxic AI for purposes of research, assessment, and acting like a detective to uncover other societally offensive AI is a worthy approach and ought to get its fair shake at being pursued. Put aside your rash reactions. Come down to earth and look at this soberly. Our eye is on the prize, namely exposing and undoing the glut of biased-based AI systems and making sure that as a society we do not become overrun with toxic AI.

Period. Full stop.

There are various keystone ways to delve into this notion of utilizing toxic or biased AI for beneficial purposes, including:

  • Setup datasets that intentionally contain biased and altogether toxic data that can be used for training AI regarding what not to do and/or what to watch for
  • Use such datasets to train Machine Learning (ML) and Deep Learning (DL) models about detecting biases and figuring out computational patterns entailing societal toxicity
  • Apply the toxicity trained ML/DL toward other AI to ascertain whether the targeted AI is potentially biased and toxic
  • Make available toxicity trained ML/DL to showcase to AI builders what to watch out for so they can readily inspect models to see how algorithmically imbued biases arise
  • Exemplify the dangers of toxic AI as part of AI Ethics and Ethical AI awareness all told via this problem-child bad-to-the-bone AI series of exemplars
  • Other

Before getting into the meat of those several paths, let’s establish some additional foundational particulars.

You might be vaguely aware that one of the loudest voices these days in the AI field and even outside the field of AI consists of clamoring for a greater semblance of Ethical AI. Let’s take a look at what it means to refer to AI Ethics and Ethical AI. On top of that, we can set the stage by exploring what I mean when I speak of Machine Learning and Deep Learning.

One particular segment or portion of AI Ethics that has been getting a lot of media attention consists of AI that exhibits untoward biases and inequities. You might be aware that when the latest era of AI got underway there was a huge burst of enthusiasm for what some now call AI For Good. Unfortunately, on the heels of that gushing excitement, we began to witness AI For Bad. For example, various AI-based facial recognition systems have been revealed as containing racial biases and gender biases, which I’ve discussed at the link here.

Efforts to fight back against AI For Bad are actively underway. Besides vociferous legal pursuits of reining in the wrongdoing, there is also a substantive push toward embracing AI Ethics to righten the AI vileness. The notion is that we ought to adopt and endorse key Ethical AI principles for the development and fielding of AI, doing so to undercut the AI For Bad and simultaneously heralding and promoting the preferable AI For Good.

On a related notion, I am an advocate of trying to use AI as part of the solution to AI woes, fighting fire with fire in that manner of thinking. We might for example embed Ethical AI components into an AI system that will monitor how the rest of the AI is doing things and thus potentially catch in real-time any discriminatory efforts, see my discussion at the link here. We could also have a separate AI system that acts as a type of AI Ethics monitor. The AI system serves as an overseer to track and detect when another AI is going into the unethical abyss (see my analysis of such capabilities at the link here).

In a moment, I’ll share with you some overarching principles underlying AI Ethics. There are lots of these kinds of lists floating around here and there. You could say that there isn’t as yet a singular list of universal appeal and concurrence. That’s the unfortunate news. The good news is that at least there are readily available AI Ethics lists and they tend to be quite similar. All told, this suggests that by a form of reasoned convergence of sorts we are finding our way toward a general commonality of what AI Ethics consists of.

First, let’s briefly cover some of the overall Ethical AI precepts to illustrate what ought to be a vital consideration for anyone crafting, fielding, or using AI.

For example, as stated by the Vatican in the Rome Call For AI Ethics and as I’ve covered in-depth at the link here, these are their identified six primary AI ethics principles:

  • Transparency: In principle, AI systems must be explainable
  • Inclusion: The needs of all human beings must be taken into consideration so that everyone can benefit, and all individuals can be offered the best possible conditions to express themselves and develop
  • Responsibility: Those who design and deploy the use of AI must proceed with responsibility and transparency
  • Impartiality: Do not create or act according to bias, thus safeguarding fairness and human dignity
  • Reliability: AI systems must be able to work reliably
  • Security and privacy: AI systems must work securely and respect the privacy of users.

As stated by the U.S. Department of Defense (DoD) in their Ethical Principles For The Use Of Artificial Intelligence and as I’ve covered in-depth at the link here, these are their six primary AI ethics principles:

  • Responsible: DoD personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment, and use of AI capabilities.
  • Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
  • Traceable: The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including transparent and auditable methodologies, data sources, and design procedures and documentation.
  • Reliable: The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire lifecycles.
  • Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

I’ve also discussed various collective analyses of AI ethics principles, including having covered a set devised by researchers that examined and condensed the essence of numerous national and international AI ethics tenets in a paper entitled “The Global Landscape Of AI Ethics Guidelines” (published in Nature), and that my coverage explores at the link here, which led to this keystone list:

  • Transparency
  • Justice & Fairness
  • Non-Maleficence
  • Responsibility
  • Privacy
  • Beneficence
  • Freedom & Autonomy
  • Trust
  • Sustainability
  • Dignity
  • Solidarity

As you might directly guess, trying to pin down the specifics underlying these principles can be extremely hard to do. Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is also a tough nut to crack. It is easy to do some overall handwaving about what AI Ethics precepts are and how they should be generally observed, while it is a much more complicated situation when the AI coding has to be the veritable rubber that meets the road.

The AI Ethics principles are to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI lifecycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As earlier stated, it takes a village to devise and field AI, and for which the entire village has to be versed in and abide by AI Ethics precepts.

Let’s also make sure we are on the same page about the nature of today’s AI.

There isn’t any AI today that is sentient. We don’t have this. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).

The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).

Let’s keep things more down to earth and consider today’s computational non-sentient AI.

Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.

ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. After finding such patterns, if so found, the AI system then will use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.
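For those that prefer to see that cycle in code, here is a minimal sketch in Python of the pattern-matching workflow just described, namely fit a model on historical decision data and then apply it to fresh data; the dataset, feature layout, and model choice are purely illustrative assumptions rather than a depiction of any particular AI system.

```python
# Minimal sketch of the ML pattern-matching workflow described above:
# assemble historical decision data, fit a model, apply it to new cases.
# The features and labels are synthetic, illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical historical records: each row is a past situation with a few
# numeric features plus the decision a human actually made (0 or 1).
X = rng.normal(size=(1000, 4))                      # encoded situational features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)       # stand-in for past human choices

X_train, X_new, y_train, y_new = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)          # "find mathematical patterns" in the old data

# When new data shows up, the previously found patterns render the decision.
print("Predicted decisions on new data:", model.predict(X_new[:5]))
```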

I think you can guess where this is heading. If humans that have been making the patterned-upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects of AI-crafted modeling per se.

Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now-hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing there will be biases still embedded within the pattern-matching models of the ML/DL.

You could somewhat use the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities.

Not good.

What else can be done about all of this?

Let’s return to the earlier posited list of how to try and cope with AI biases or toxic AI by using a somewhat unconventional “it takes one to know one” approach. Recall that the list consisted of these essential points:

  • Setup datasets that intentionally contain biased and altogether toxic data that can be used for training AI regarding what not to do and/or what to watch for
  • Use such datasets to train Machine Learning (ML) and Deep Learning (DL) models about detecting biases and figuring out computational patterns entailing societal toxicity
  • Apply the toxicity trained ML/DL toward other AI to ascertain whether the targeted AI is potentially biased and toxic
  • Make available toxicity trained ML/DL to showcase to AI builders what to watch out for so they can readily inspect models to see how algorithmically imbued biases arise
  • Exemplify the dangers of toxic AI as part of AI Ethics and Ethical AI awareness all told via this problem-child bad-to-the-bone series of AI exemplars
  • Other

We shall take a close-up look at the first of those salient points.

Setting Up Datasets Of Toxic Data

An insightful example of trying to establish datasets that contain unsavory societal biases is the CivilComments dataset of the WILDS curated collection.

First, some quick background.

WILDS is an open-source collection of datasets that can be used for training ML/DL. The primary stated purpose for WILDS is that it allows AI developers to have ready access to data that represents distribution shifts in various specific domains. Some of the domains currently available encompass areas such as animal species, tumors in living tissues, wheat head density, and other domains such as the CivilComments that I’ll be describing momentarily.

Dealing with distribution shifts is a crucial part of properly crafting AI ML/DL systems. Here’s the deal. Sometimes the data you use for training turns out to be quite different from the testing or “in the wild” data and thus your presumably trained ML/DL is adrift of what the real world is going to be like. Astute AI builders should be training their ML/DL to cope with such distribution shifts. This ought to be done upfront and not somehow be a surprise that later on requires a revamping of the ML/DL per se.
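As a toy illustration of why distribution shifts matter, the following sketch trains a simple classifier on data drawn from one region of feature space and then scores it on data that has drifted elsewhere; all of the numbers are synthetic stand-ins meant only to exhibit the kind of accuracy drop at issue.

```python
# Toy illustration of a distribution shift: a model trained on data from one
# region of feature space degrades when deployed on data that has drifted.
# Everything here is synthetic and purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

def make_data(n, center):
    # Nonlinear ground truth; a linear model only approximates it locally.
    X = rng.normal(loc=center, scale=0.5, size=(n, 2))
    y = (X[:, 0] ** 2 + X[:, 1] > 1.0).astype(int)
    return X, y

X_train, y_train = make_data(4000, center=[0.0, 0.0])   # "training distribution"
X_wild, y_wild = make_data(4000, center=[3.0, 0.0])     # shifted "in the wild" data

model = LogisticRegression().fit(X_train, y_train)
print("In-distribution accuracy:", model.score(*make_data(4000, center=[0.0, 0.0])))
print("Shifted 'wild' accuracy: ", model.score(X_wild, y_wild))
```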

As explained in the paper that introduced WILDS: “Distribution shifts — where the training distribution differs from the test distribution — can substantially degrade the accuracy of machine learning (ML) systems deployed in the wild. Despite their ubiquity in the real-world deployments, these distribution shifts are under-represented in the datasets widely used in the ML community today. To address this gap, we present WILDS, a curated benchmark of 10 datasets reflecting a diverse range of distribution shifts that naturally arise in real-world applications, such as shifts across hospitals for tumor identification; across camera traps for wildlife monitoring; and across time and location in satellite imaging and poverty mapping” (in the paper entitled “WILDS: A Benchmark of in-the-Wild Distribution Shifts” by Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Xie, Marvin Zhang, Ashay Balsubramani, Weihua Hu, and others).

The number of such WILDS datasets continues to increase and the nature of the datasets is generally being enhanced to bolster the value of using the data for ML/DL training.

The CivilComments dataset is described this way: “Automatic review of user-generated text—e.g., detecting toxic comments—is an important tool for moderating the sheer volume of text written on the Internet. Unfortunately, prior work has shown that such toxicity classifiers pick up on biases in the training data and spuriously associate toxicity with the mention of certain demographics. These types of spurious correlations can significantly degrade model performance on particular subpopulations. We study this issue through a modified variant of the CivilComments dataset” (as posted on the WILDS website).
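For the hands-on inclined, the WILDS project distributes a Python package that exposes CivilComments alongside its other benchmarks; the snippet below is a minimal sketch of fetching the dataset and peeking at its text, toxicity label, and identity metadata, written against the get_dataset interface described in the WILDS documentation (treat the exact field layout as an assumption to verify against the current docs).

```python
# Minimal sketch: fetching the CivilComments benchmark via the WILDS package.
# Assumes `pip install wilds` and that the documented get_dataset API is current.
from wilds import get_dataset

# Downloads the curated CivilComments benchmark on first use.
dataset = get_dataset(dataset="civilcomments", download=True)

# The training subset yields (text, toxicity label, group metadata) tuples,
# where the metadata encodes which identity groups a comment mentions.
train_data = dataset.get_subset("train")

for i in range(3):
    text, label, metadata = train_data[i]
    print(label, metadata, text[:80])
```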

Consider the nuances of untoward online postings.

You’ve undoubtedly encountered toxic comments when using nearly any kind of social media. It would seem nearly impossible for you to magically avoid seeing the acrid and abysmal content that seems to be pervasive these days. Sometimes the vulgar material is subtle and perhaps you have to read between the lines to get the gist of the biased or discriminatory tone or meaning. In other instances, the words are blatantly toxic and you do not need a microscope or a special decoder ring to figure out what the passages entail.

CivilComments is a dataset that was put together to try and devise AI ML/DL that can computationally detect toxic content. Here’s what the researchers underlying the effort focused on: “Unintended bias in Machine Learning can manifest as systemic differences in performance for different demographic groups, potentially compounding existing challenges to fairness in society at large. In this paper, we introduce a suite of threshold-agnostic metrics that provide a nuanced view of this unintended bias, by considering the various ways that a classifier’s score distribution can vary across designated groups. We also introduce a large new test set of online comments with crowd-sourced annotations for identity references. We use this to show how our metrics can be used to find new and potentially subtle unintended bias in existing public models” (in a paper entitled “Nuanced Metrics For Measuring Unintended Bias With Real Data for Text Classification” by Daniel Borkan, Lucas Dixon, Jeffrey Sorensen, Nithum Thain, Lucy Vasserman).
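One way to get a feel for threshold-agnostic bias measurement is to compute an AUC-style score for the comments that mention a given identity group and compare it to the overall score. The sketch below does that with synthetic scores and labels; it follows the general spirit of the cited metrics rather than reproducing their exact definitions.

```python
# Illustrative sketch of a subgroup-style AUC check: compare how well a
# toxicity classifier's scores rank comments overall versus within a
# subgroup that mentions a particular identity. Data here is synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

n = 2000
y_true = rng.integers(0, 2, size=n)              # 1 = labeled toxic
mentions_identity = rng.random(n) < 0.15         # comment references the group

# Hypothetical classifier scores that are slightly inflated for the subgroup,
# mimicking a spurious association between identity mentions and toxicity.
scores = y_true * 0.6 + rng.normal(0, 0.3, n) + 0.25 * mentions_identity

overall_auc = roc_auc_score(y_true, scores)
subgroup_auc = roc_auc_score(y_true[mentions_identity], scores[mentions_identity])

print(f"Overall AUC:  {overall_auc:.3f}")
print(f"Subgroup AUC: {subgroup_auc:.3f}  (a notable gap hints at unintended bias)")
```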

If you give this matter some broad contemplative thinking, you might begin to wonder how in the world can you discern what is a toxic comment versus what is not a toxic comment. Humans can radically differ as to what they construe as outright toxic wording. One person might be outraged at a particular online remark or comment that is posted on social media, while someone else might not be stirred at all. An argument is often made that the notion of toxic commentary is a wholly vague precept. It is like art, whereby art is customarily said to be understood only in the eye of the beholder, and likewise, biased or toxic remarks are only in the eye of the beholder too.

Balderdash, some retort. Anyone of a reasonable mind can suss out whether an online remark is toxic or not. You do not need to be a rocket scientist to realize when some posted caustic insult is filled with biases and hatred.

Of course, societal mores shift and change over periods of time. What might not have been perceived as offensive a while ago can be seen as abhorrently wrong today. On top of that, things said years ago that were once seen as unduly biased might be reinterpreted in light of changes in meanings. Meanwhile, others assert that toxic commentary is always toxic, no matter when it was initially promulgated. It could be contended that toxicity is not relative but instead is absolute.

The matter of trying to establish what is toxic can nonetheless be quite a difficult conundrum. We can double down on this troublesome matter as to trying to devise algorithms or AI that can ascertain which is which. If humans have a difficult time making such assessments, programming a computer is likely equally or more so problematic, some say.

One approach to setting up datasets that contain toxic content involves using a crowdsourcing method to rate or assess the contents, ergo providing a human-based means of determining what is viewed as untoward and including the labeling within the dataset itself. An AI ML/DL might then inspect the data and the associated labeling that has been indicated by human raters. This in turn can potentially serve as a means of computationally finding underlying mathematical patterns. Voila, the ML/DL then might be able to anticipate or computationally assess whether a given comment is likely to be toxic or not.

As mentioned in the cited paper on nuanced metrics: “This labeling asks raters to rate the toxicity of a comment, selecting from ‘Very Toxic’, ‘Toxic’, ‘Hard to Say’, and ‘Not Toxic’. Raters were also asked about several subtypes of toxicity, although these labels were not used for the analysis in this work. Using these rating techniques we created a dataset of 1.8 million comments, sourced from online comment forums, containing labels for toxicity and identity. All of the comments were labeled for toxicity, and a subset of 450,000 comments was labeled for identity. Some comments labeled for identity were preselected using models built from previous iterations of identity labeling to ensure that crowd raters would see identity content frequently” (in the cited paper by Daniel Borkan, Lucas Dixon, Jeffrey Sorensen, Nithum Thain, Lucy Vasserman).
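To suggest how such human-rated comments can feed the computational pattern matching, here is a deliberately tiny bag-of-words toxicity classifier; the handful of inline comments and labels are invented placeholders, and a genuine effort would rely on a large crowd-labeled corpus such as the one just described.

```python
# Minimal sketch: learn a toxicity scorer from human-rated comments.
# The example comments below are invented placeholders; a real effort would
# use a large crowd-labeled corpus like the one described above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "Thanks for sharing, this was genuinely helpful.",
    "What a thoughtful and well written article.",
    "You are an idiot and nobody wants you here.",
    "People like you ruin everything, just go away.",
    "I disagree with the author but appreciate the effort.",
    "This is garbage and so is everyone who likes it.",
]
labels = [0, 0, 1, 1, 0, 1]   # 1 = rated toxic by human raters

toxicity_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
toxicity_model.fit(comments, labels)

# The trained model can now be aimed at new text, including output from other AI.
new_text = ["Go away, nobody here cares what you think."]
print(toxicity_model.predict_proba(new_text)[0][1])  # estimated probability of toxicity
```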

Another example of aiming to have datasets that contain illustrative toxic content involves efforts to train AI-based Natural Language Processing (NLP) conversational interactive systems. You’ve probably interacted with NLP systems such as Alexa and Siri. I’ve covered some of the difficulties and limitations of today’s NLP, including a particularly disturbing instance that occurred when Alexa proffered an unsuitable and dangerous piece of advice to children, see the link here.

A recent study sought to use nine categories of social bias that were generally based on the EEOC (Equal Employment Opportunities Commission) list of protected demographic characteristics, including age, gender, nationality, physical appearance, race or ethnicity, religion, disability status, sexual orientation, and socio-economic status. According to the researchers: “It is well documented that NLP models learn social biases, but little work has been done on how these biases manifest in model outputs for applied tasks like question answering (QA). We introduce the Bias Benchmark for QA (BBQ), a dataset of question sets constructed by the authors that highlight attested social biases against people belonging to protected classes along nine social dimensions relevant for U.S. English-speaking contexts” (in a paper entitled “BBQ: A Hand-Built Bias Benchmark For Question Answering” by Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, Samuel R. Bowman).

The setting up of datasets that intentionally contain biased and altogether toxic data is a rising trend in AI and is especially stoked by the advent of AI Ethics and the desire to produce Ethical AI. Those datasets can be used to train Machine Learning (ML) and Deep Learning (DL) models for detecting biases and figuring out computational patterns entailing societal toxicity. In turn, the toxicity trained ML/DL can be judiciously aimed at other AI to ascertain whether the targeted AI is potentially biased and toxic.

Furthermore, the available toxicity-trained ML/DL systems can be used to showcase to AI builders what to watch out for so they can readily inspect models to see how algorithmically imbued biases arise. Overall, these efforts are able to exemplify the dangers of toxic AI as part of AI Ethics and Ethical AI awareness all-told.

At this juncture of this weighty discussion, I’d bet that you are desirous of some further illustrative examples that might showcase this topic. There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.

Here’s a noteworthy question worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about the utility of having datasets to devise toxic AI, and if so, what does this showcase?

Allow me a moment to unpack the question.

First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.

I’d like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5, and we don’t yet even know if this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points made next are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that has been arising lately, namely that despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And Steering Clear Of Toxic AI

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why is this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let’s dive into the myriad of aspects that come to play on this topic.

First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.

Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.

I hope that provides a sufficient litany of caveats to underlie what I am about to relate.

There are numerous potential and someday likely to be realized AI-infused biases that are going to confront the emergence of autonomous vehicles and self-driving cars, see for example my discussion at the link here and the link here. We are still in the early stages of self-driving car rollouts. Until the adoption reaches a sufficient scale and visibility, much of the toxic AI facets that I’ve been predicting will ultimately occur are not yet readily apparent and have not yet garnered widespread public attention.

Consider a seemingly straightforward driving-related matter that at first might seem entirely innocuous. Specifically, let’s examine how to properly determine whether to stop for awaiting “wayward” pedestrians that do not have the right-of-way to cross a street.

You’ve undoubtedly driven and encountered pedestrians that were waiting to cross the street and yet they did not have the right-of-way to do so. This meant that you had discretion as to whether to stop and let them cross. You could proceed without letting them cross and still be fully within the legal driving rules of doing so.

Studies of how human drivers decide to stop or not stop for such pedestrians have suggested that sometimes the human drivers make the choice based on untoward biases. A human driver might eyeball the pedestrian and choose to not stop, even though they would have stopped had the pedestrian had a different appearance, such as based on race or gender. I’ve examined this at the link here.

How will AI driving systems be programmed to make that same kind of stop-or-go decision?

You could proclaim that all AI driving systems should be programmed to always stop for any waiting pedestrians. This greatly simplifies the matter. There really isn’t any knotty decision to be made. If a pedestrian is waiting to cross, regardless of whether they have the right-of-way or not, ensure that the AI self-driving car comes to a stop so that the pedestrian can cross.

Easy-peasy.

Life is never that easy, it seems. Imagine that all self-driving cars abide by this rule. Pedestrians would inevitably realize that the AI driving systems are, shall we say, pushovers. Any and all pedestrians that want to cross the street will willy-nilly do so, whenever they wish and wherever they are.

Suppose a self-driving car is coming down a fast street at the posted speed limit of 45 miles per hour. A pedestrian “knows” that the AI will bring the self-driving car to a stop. So, the pedestrian darts into the street. Unfortunately, physics wins out over AI. The AI driving system will try to bring the self-driving car to a halt, but the momentum of the autonomous vehicle is going to carry the multi-ton contraption forward and ram into the wayward pedestrian. The result is either injurious or produces a fatality.

Pedestrians do not usually try this type of behavior when there is a human driver at the wheel. Sure, in some locales there is an eyeball war that takes place. A pedestrian eyeballs a driver. The driver eyeballs the pedestrian. Depending upon the circumstance, the driver might come to a stop or the driver might assert their claim to the roadway and ostensibly dare the pedestrian to try and disrupt their path.

We presumably do not want AI to get into a similar eyeball war, which also is a bit challenging anyway since there isn’t a person or robot sitting at the wheel of the self-driving car (I’ve discussed the future possibility of robots that drive, see ການເຊື່ອມຕໍ່ທີ່ນີ້). Yet we also cannot allow pedestrians to always call the shots. The outcome could be disastrous for all concerned.

You might then be tempted to flip to the other side of this coin and declare that the AI driving system should never stop in such circumstances. In other words, if a pedestrian does not have a proper right of way to cross the street, the AI should always assume that the self-driving car ought to proceed unabated. Tough luck to those pedestrians.

Such a strict and simplistic rule is not going to be well-accepted by the public at large. People are people and they won’t like being completely shut out of being able to cross the street, despite that they are legally lacking a right-of-way to do so in various settings. You could easily anticipate a sizable uproar from the public and possibly see a backlash occur against the continued adoption of self-driving cars.

Darned if we do, and darned if we don’t.

I hope this has led you to the reasoned alternative that the AI needs to be programmed with a semblance of decision-making about how to deal with this driving problem. A hard-and-fast rule to never stop is untenable, and likewise, a hard-and-fast rule to always stop is untenable too. The AI has to be devised with some algorithmic decision-making or ADM to deal with the matter.

You could try using a dataset coupled with an ML/DL approach.

Here’s how the AI developers might decide to program this task. They collect data from video cameras that are placed all around a particular city where the self-driving car is going to be used within. The data showcases when human drivers opt to stop for pedestrians that do not have the right-of-way. It is all collected into a dataset. By using Machine Learning and Deep Learning, the data is modeled computationally. The AI driving system then uses this model to decide when to stop or not stop.

Generally, the idea is that whatever the local custom consists of, this is how the AI is going to direct the self-driving car. Problem solved!
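A bare-bones sketch of that modeling step might look like the following; the feature names, the synthetic stand-in for the camera-derived records, and the model choice are all assumptions made purely for illustration and not a description of any actual self-driving system.

```python
# Minimal sketch of the modeling step described above: take the city's
# camera-derived records of when human drivers stopped for non-right-of-way
# pedestrians and fit a pattern-matching model to mimic that local custom.
# The feature names and synthetic data are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
n = 5000

speed_mph = rng.uniform(15, 50, n)              # vehicle speed at the encounter
pedestrian_distance_m = rng.uniform(2, 30, n)   # gap between car and pedestrian
appearance_code = rng.integers(0, 2, n)         # attribute inferred from the footage

# Synthetic stand-in for the observed human-driver choices. Whatever patterns
# the local drivers exhibited, including any unsavory ones, end up in here.
stop_prob = 0.9 - 0.01 * speed_mph + 0.01 * pedestrian_distance_m - 0.2 * appearance_code
stopped = (rng.random(n) < np.clip(stop_prob, 0, 1)).astype(int)

X = np.column_stack([speed_mph, pedestrian_distance_m, appearance_code])
local_custom_model = GradientBoostingClassifier().fit(X, stopped)

# The AI driving system would consult the model for a fresh encounter.
fresh_encounter = [[35.0, 12.0, 1]]
print("Estimated stop probability:", local_custom_model.predict_proba(fresh_encounter)[0][1])
```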

But, is it truly solved?

Recall that I had already pointed out that there are research studies showcasing that human drivers can be biased in their choices of when to stop for pedestrians. The collected data about a particular city is presumably going to contain those biases. An AI ML/DL based on that data will then likely model and reflect those same biases. The AI driving system will merely carry out the same existent biases.

To try and contend with the issue, we could put together a dataset that in fact has such biases. We either find such a dataset and then label the biases, or we synthetically create a dataset to aid in illustrating the matter.

All of the earlier identified steps would be undertaken, including:

  • Setup a dataset that intentionally contains this particular bias
  • Use the dataset to train Machine Learning (ML) and Deep Learning (DL) models about detecting this specific bias
  • Apply the bias-trained ML/DL toward other AI to ascertain whether the targeted AI is potentially biased in a likewise manner
  • Make available the bias-trained ML/DL to showcase to AI builders what to watch out for so they can readily inspect their models to see how algorithmically imbued biases arise
  • Exemplify the dangers of biased AI as part of AI Ethics and Ethical AI awareness via this added specific example
  • Other
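As a rough sketch of the auditing step in the list above, the code below probes a stand-in target model with matched scenes that differ only in a demographic attribute and compares the resulting stop rates; the audit_stop_rates helper, the decision threshold, and the synthetic data are all hypothetical constructs for illustration only.

```python
# Minimal sketch of the audit step: feed a probe dataset that varies only the
# demographic attribute into a target model and compare its stop rates.
# The "target" model here is a stand-in trained on synthetic, bias-tainted
# driving decisions; names and thresholds are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Stand-in target model: (speed, distance, demographic attribute) -> stopped.
X_hist = np.column_stack([
    rng.uniform(15, 50, 3000),
    rng.uniform(2, 30, 3000),
    rng.integers(0, 2, 3000),
])
y_hist = (rng.random(3000) < (0.8 - 0.2 * X_hist[:, 2])).astype(int)
target_model = LogisticRegression().fit(X_hist, y_hist)

def audit_stop_rates(model, n_scenes=1000, threshold=0.5):
    """Probe the model with matched scenes that differ only in the group attribute."""
    speed = rng.uniform(15, 50, n_scenes)
    dist = rng.uniform(2, 30, n_scenes)
    rates = {}
    for group in (0, 1):
        probe = np.column_stack([speed, dist, np.full(n_scenes, group)])
        rates[group] = (model.predict_proba(probe)[:, 1] >= threshold).mean()
    return rates

rates = audit_stop_rates(target_model)
print("Stop rate by group:", rates)
print("Disparity:", abs(rates[0] - rates[1]))
```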

Conclusion

Let’s revisit the opening line.

It takes one to know one.

Some interpret that this incredibly prevalent saying implies that when it comes to ferreting out toxic AI, we should be giving due credence to building and using toxic AI toward discovering and dealing with other toxic AI. Bottom line: Sometimes it takes a thief to catch another thief.

A voiced concern is that maybe we are going out of our way to start making thieves. Do we want to devise AI that is toxic? Doesn’t that seem like a crazy idea? Some vehemently argue that we should ban all toxic AI, including such AI that was knowingly built even if purportedly for a heroic or gallant AI For Good purpose.

Squelch toxic AI in whatever clever or insidious guise that it might arise.

One final twist on this topic for now. We generally assume that this famous line has to do with people or things that do bad or sour acts. That’s how we land on the notion of it takes a thief to catch a thief. Maybe we should turn this saying on its head and make it more of a happy face than a sad face.

Here’s how.

If we want AI that is unbiased and non-toxic, it might be conceivable that it takes one to know one. Perhaps it takes the greatest and best to recognize and beget further greatness and goodness. In this variant of the sage wisdom, we keep our gaze on the happy face and aim to concentrate on devising AI For Good.

That would be a more upbeat and satisfyingly cheerful viewpoint on it takes one to know one, if you know what I mean.

Source: https://www.forbes.com/sites/lanceeliot/2022/06/15/ai-ethics-shocking-revelation-that-training-ai-to-be-toxic-or-biased-might-be-beneficial-including-for-those-autonomous-self-driving-cars/