Will AI Harm Humans?

Francis Wong
4 min read · Jan 28, 2024


This article was first published in Chinese on the website of a Hong Kong newspaper. The English version below is based on a Bing Copilot translation.

A robot shaking hands with a human: AI can be a good partner to humans.

AI has been one of the most talked-about topics recently. To the general public, who do not understand how AI works, large language models such as ChatGPT naturally seem magical, as if AI were a new species with self-awareness. AI has also, to some extent, changed how people work, what they work on, and what jobs are available, so public fear of AI is understandable. But will AI really harm humans? And how does AI actually work?
If AI is a tool, and a careless human user harms someone with it, that is no different from someone mishandling a kitchen knife: the fault is not AI's. But could AI become self-aware, reach the level of artificial general intelligence (AGI), and come to see humans as the enemy?
Look at the admission requirements for AI programmes at the world's top universities and you will find that they all demand a strong foundation in mathematics. Indeed, AI is built on mathematics, statistics, and related fields. The discipline itself has a long history: ever since the famous Turing Test of 1950, scientists have pursued the goal of giving computers human-like intelligence, which, simply put, means a computer can interact with a human without the human being able to tell that it is not human. At the time, however, computing power was low and there was no Internet, so there was neither the compute nor the large datasets needed for training, and the theoretical foundations of AI were still immature. AI was therefore mostly hard-coded, providing predetermined answers only to predefined questions, and the results fell far short of expectations. Now that the conditions are in place and the market sees the potential, capital is naturally pouring in.
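The "hard-coded" approach described above can be sketched in a few lines of Python. The questions and canned answers here are invented purely for illustration; the point is that such a program cannot respond to anything outside its fixed rule table:

```python
# Hard-coded "AI": predetermined answers to predefined questions only.
# Nothing is learned; anything outside the rule table hits the fallback.
RULES = {
    "hello": "Hello! How can I help you?",
    "what is your name": "I am a simple rule-based program.",
}

def respond(question: str) -> str:
    # Normalise the input, then look it up in the fixed rule table.
    key = question.strip().lower().rstrip("?!.")
    return RULES.get(key, "I do not understand that question.")

print(respond("Hello"))                 # predefined question, canned answer
print(respond("Will AI harm humans?"))  # unknown question, fallback
```

Modern systems like large language models invert this: instead of an author enumerating every question, a statistical model is trained on data to generalise to inputs it has never seen.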
A recent IEEE Spectrum article plotted a square matrix divided into four quadrants. The vertical axis is the likelihood that AGI will emerge (high at the top, low at the bottom); the horizontal axis is the potential for AGI to harm humans (high on the left, low on the right). The article collected the views of 23 AI experts and placed each into the matrix. Those in the upper left believe AGI is likely to emerge and likely to be harmful; those in the lower right believe both are unlikely. Surprisingly, the upper-left quadrant contains five people, including Geoffrey Hinton, one of the three "fathers of modern AI". The upper right contains three people who believe AGI is likely but harm is unlikely, including OpenAI co-founder Sam Altman. Two more, including another "father of modern AI", Yoshua Bengio, believe AGI is likely but are neutral on harm, so they sit at the top centre of the matrix. At the bottom centre is NYU professor Gary Marcus, who considers AGI unlikely but is neutral on harm. The majority, twelve in all, including the third "father of modern AI", Yann LeCun, and Andrew Ng, Stanford professor and founder of Landing AI, sit in the lower right: they believe AGI is unlikely to emerge and unlikely to be harmful, and that we are still a long way from AGI. Notably, not a single expert holds that AGI is unlikely to emerge yet likely to be harmful.
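The quadrant counts above can be checked with a short tally. The dictionary keys below are this sketch's own labels, not IEEE Spectrum's; the counts follow the description in the text:

```python
# Tally of the expert matrix: (AGI likelihood, harm likelihood) -> experts.
matrix = {
    ("high", "high"):    5,   # e.g. Geoffrey Hinton
    ("high", "neutral"): 2,   # e.g. Yoshua Bengio
    ("high", "low"):     3,   # e.g. Sam Altman
    ("low",  "neutral"): 1,   # Gary Marcus
    ("low",  "low"):     12,  # e.g. Yann LeCun, Andrew Ng
    ("low",  "high"):    0,   # nobody holds this combination
}

total = sum(matrix.values())
pessimists = matrix[("high", "high")]
print(total)                         # 23 experts in all
print(round(pessimists / total, 2))  # 0.22, roughly the quarter holding the opposite view
```

The arithmetic confirms the article's framing: the five upper-left pessimists are about 22% of the 23 experts, close to a quarter.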
Since most of the experts fall in the upper-left and lower-right corners, their views are clearly polarized. Although more of them believe AGI is unlikely to emerge and unlikely to be harmful, about a quarter hold the opposite view to the majority, so public fear of AI is understandable. As more people come to understand and use AI, that fear should fade, just as people once believed a photograph could capture a person's soul, or that characters could step out of the movie screen into real life.

The Chinese version was published on 2 November 2023 on on.cc.

https://hk.on.cc/hk/bkn/cnt/finance/20231102/bkn-20231102070026779-1102_00842_001.html



Francis Wong

An executive, software developer, university lecturer and qualified accountant. Research interests: leadership, employee engagement, IS, ML and AI.