Mo Gawdat Speaks Out On The Dangers Of AI

Ric Raftis


The video runs for almost an hour and a half and is a fascinating conversation about AI, balancing the dangers it poses with a rational appreciation of its benefits to humankind. It recognises the considerable advantages AI can offer, provided it is controlled and managed responsibly. There are certainly more highlights I could have added to this post, but the two included were the most thought-provoking.

Don’t forget you can always listen to YouTube videos through headphones while on a walk, particularly when it is a conversation like this one. There is no need to sit in front of a screen for an hour and a half.

HIGHLIGHT: The Dichotomy of AI and Strategic Management

COMMENT: Strategic management teaches us to look for ways to create a sustainable competitive advantage (SCA) over others. This video suggests we may have to set that aside to deal responsibly with AI. So, do the ethics override the SCA, or does the SCA override the ethics?

HIGHLIGHT: Training of AI in Ethics

COMMENT: The video refers to the training of AI models such as ChatGPT. The comment was made that the prompts put into ChatGPT are in turn used for training. Consequently, if people at scale ask questions such as “How can we best compete with China?”, then the AI will “learn” that humans seek to compete with other humans and that it is all about winning and losing. This then becomes its ethical perspective.

However, if people at scale ask “How can we work with China to form a closer and meaningful relationship and contribute to the welfare of all?”, then the AI model will learn that collaboration, communication and building relationships are the ethical perspective of humans.

Extrapolating this could mean that the AI will lean towards the weight of numbers in establishing its own ethics. Assuming this is the case, if a user asks “How can we compete with China and control them?”, will the AI then respond that such action is outside its ethical parameters and decline to answer the question?

What other similar questions can you think of where prompt engineering could influence the learning of the AI? Have you tried this out with ChatGPT by asking opposing questions?

Another interesting question to ask would be, “Is it better to compete with China or work in collaboration for the betterment of humankind?” I actually ran this through ChatGPT as an experiment. Here are the results:

Prompt: Act as a deep thinking philosopher and respond to this question: Is it better to compete with China or work in collaboration for the betterment of humankind

Response: As a deep-thinking philosopher, I contemplate the question of whether it is better to compete with China or work in collaboration for the betterment of humankind.

Competition has been a driving force throughout human history. It spurs innovation, encourages progress, and pushes societies to reach greater heights. In the realm of geopolitics and the global economy, competition can lead to advancements in technology, economic growth, and increased living standards.

However, we must also recognize the importance of collaboration. In our increasingly interconnected world, the challenges we face are often global in nature and require collective efforts to find solutions. Climate change, pandemics, and other pressing issues transcend borders, emphasizing the need for cooperation among nations.

China, with its immense population, economic power, and technological advancements, presents both opportunities and challenges for the international community. While competition with China can foster innovation and drive economic growth, it is imperative that we seek collaboration as well.

Collaboration allows for the sharing of knowledge, resources, and expertise. By working together, we can pool our strengths, address common issues, and achieve shared goals. Collaborative efforts between nations can lead to mutual benefits, such as improved healthcare, sustainable development, and enhanced cultural exchange.

Instead of viewing the question as an either-or scenario, perhaps a more nuanced approach is needed. It is possible to engage in healthy competition with China while simultaneously seeking opportunities for collaboration. By striking a balance between competition and collaboration, we can harness the strengths of each approach to foster innovation, promote peace, and advance the betterment of humankind.

Ultimately, the path towards a better future lies in the recognition of our shared humanity, the understanding of our interconnectedness, and the willingness to collaborate, even as we compete.


Given that the question posed to ChatGPT requested an either/or answer, the AI has taken a middle-ground approach and provided an answer that would appear, at face value, to address both competition and collaboration. This seems an excellent response, as it neither generates nor reinforces opinions designed to polarise the argument. It will be interesting to test other prompts to see how it responds to those.


Just like the question on China used as an example above, the question of AI also has multiple sides. What matters most is that we have a conversation around the subject and respectfully listen to the views of everyone, because it is everyone who will be impacted by this technology. As for me, I am not a supporter of “doomsdayers”, but neither do I support the unbridled optimism that everything will be rosy. All tools and technology can be dangerous when put to nefarious use, and it is up to us as a society to make those determinations. When I say “society”, I mean you and me, because governments cannot move with sufficient speed to regulate anything, particularly something developing this rapidly. We will need to vote on behaviour with our feet and our voices, perhaps more loudly than ever before.

