In my previous blog post, I engaged in a creative exercise to envision Peter Drucker’s viewpoints about the benefits of artificial intelligence (AI). In that post, I explored how Drucker would have viewed the importance of AI for enhancing effectiveness, efficiency, strategic focus, and creativity. I also highlighted that he would probably have advocated a symbiotic approach to decision-making in which AI complements and augments human judgment.
In this post, I would like to envision Drucker’s possible viewpoints about the problems or hurdles associated with AI use in organizations. What negative views might Drucker have expressed if he had provided such commentary on the use of AI in organizations?
I would like to remind readers that while Drucker did mention the term “artificial intelligence” in his work (in “Managing for the Future” and “The Ecological Vision”), his definition of AI and the role he envisioned for machines (e.g., computers) in decision-making were limited by the technological capabilities of his time. In my posts, I have tried to explore Drucker’s views beyond those limitations.
AI and Problems in Decision-Making
As discussed in my previous post, Drucker’s reference to computers as “morons” might not be entirely accurate today in a world of effective generative and predictive AI models. However, one aspect of his criticism of computers, or of their use for analysis, remains valid even in the age of advanced AI algorithms. AI tools and algorithms are developed by humans and are trained on available data; they are therefore only as accurate as the data they are trained on. Machine learning algorithms exemplify the notion of “garbage in, garbage out” when they are trained on inaccurate, biased, or incomplete data. In addition, when data is unavailable, or when the decision environment is so volatile, uncertain, and unpredictable that data from the past is no longer relevant, intuitive decision-making can generate better outcomes than data-driven analysis. Therefore, delegating the entire decision process to an AI system would be problematic, especially in complex environments where the accuracy and integrity of data are not guaranteed. I believe that Drucker would also have pushed back against the careless use of these tools. In this sense, computers can still be considered morons!
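To make the “garbage in, garbage out” point concrete, here is a minimal, hypothetical sketch; the dataset, model, and noise rate are my own illustrative assumptions, not drawn from Drucker or from any real organizational data. It trains the same simple model twice, once on clean labels and once on partially corrupted labels, so the only thing that changes is the quality of the training data.

```python
# Illustrative sketch of "garbage in, garbage out": identical model, different data quality.
# All values below (2,000 records, 20 features, 30% label corruption) are arbitrary assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "organizational" records with a binary outcome.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Garbage" training data: flip 30% of the training labels to mimic biased or inaccurate records.
y_noisy = y_train.copy()
flip = rng.random(len(y_noisy)) < 0.30
y_noisy[flip] = 1 - y_noisy[flip]
noisy_model = LogisticRegression(max_iter=1000).fit(X_train, y_noisy)

print("Accuracy when trained on clean labels:    ", clean_model.score(X_test, y_test))
print("Accuracy when trained on corrupted labels:", noisy_model.score(X_test, y_test))
```

I would expect the model trained on the corrupted labels to score noticeably lower on the same test set, which is the point of the sketch: the quality of the underlying data, not the sophistication of the algorithm, often sets the ceiling on decision quality.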
Ethical Concerns Due to Lack of Human Judgment
Drucker placed significant emphasis on human judgment in decision-making processes. He believed that effective management requires value-based judgment and ethical decision-making, qualities he considered uniquely human. He would have been concerned about AI systems lacking the ability to understand complex human contexts, which can lead to flawed (e.g., biased) decisions. Indeed, Drucker viewed technology as a double-edged sword that can be used for good or evil, and he emphasized that technological knowledge requires responsibility and that innovation needs to serve society. In his essay “The First Technological Revolution and Its Lessons,” he wrote that “… a time of true technological revolution is not a time for exultation. It is not a time for despair either. It is a time for work and for responsibility” (Drucker, 1965).
Job Displacement and Inequality
Drucker was concerned about social inequality and believed that management, as a discipline, had a significant role to play in addressing societal issues. He wrote extensively about industrial workers and the challenges automation created for them. With AI potentially concentrating wealth and power in the hands of a few, he would have been similarly concerned about the danger of widening the social gap through job displacement or wage gaps. For example, AI could replace many knowledge workers in the same way that automation replaced many industrial workers and created long-lasting societal and political challenges.
Drucker would also have advocated for policies or regulations that promote an equitable distribution of the benefits of AI technologies, ensuring that these advancements benefit society broadly. To avoid massive job displacement due to AI advancements, he would probably have suggested technological training for all workers. He was a strong advocate of “lifelong learning” and “continuous learning” as ways to keep up with accelerating technological change.
Overreliance on AI
Drucker viewed technology as a methodology for work. He advocated for the thoughtful use of technology in organizations, focusing on how it aligns with human goals and values while improving productivity. This perspective implies a concern about overreliance on technology without a sufficient understanding of its possible negative implications. Drucker’s general philosophy on technology and management suggests that he would have been cautious about organizations and societies becoming overly dependent on AI. Drucker had written about concerns regarding workers being the "slave to the machine" in the era of automation.
Similarly, Drucker might have been concerned that overreliance on AI for various tasks could diminish creative thinking and hinder innovative problem-solving. However, this is precisely an area where AI can help if it is used as a complement to, rather than a replacement for, human thinking and judgment. For example, recent advances in generative AI (e.g., ChatGPT) can build on and enhance human imagination when used to support idea generation.
Concluding Remarks
In my two blog posts about Drucker and AI, I have tried to draw parallels from Drucker’s writings on innovation, performance, and technology to demonstrate how his existing body of work can contribute to discussions of the benefits and challenges of artificial intelligence. In doing so, I have tried to showcase the consistency of his principles and the relevance of his ideas to contemporary technological challenges.
Overall, based on what I have communicated in my two blog posts on the subject, I believe that a human-AI collaboration framework in which AI augments human judgment instead of replacing it is in line with what Peter Drucker would have advocated had he taken a position on AI use in organizations. Of course, the human-AI augmentation paradigm itself raises many other important questions, such as the division of responsibility between machines and humans, that warrant dedicated discussions beyond the scope of this blog post.