Welcome to the last installment of this blog series, where we bring knowledge, wisdom, and technology together. How can human wisdom and technology, specifically AI, collaborate to redefine knowledge, knowledge work, and a knowledge society?
As we saw in the first installment, the nature of knowledge has long been a topic of discussion. Peter Drucker was concerned with the relationship between knowledge and power, and with the changing nature of knowledge, particularly in relation to technology. In his twentieth-century era, new technology in the form of atomic weapons unleashed knowledge that contained the power to destroy humankind; with this kind of technological knowledge came enormous responsibility. Advances in computing in Drucker's time carried with them fears of economic and social turmoil. Would the automation of manufacturing processes and the introduction of the computer to knowledge work eliminate jobs and force a massive restructuring of the economy? In Drucker's view, automation was part of a larger process of seeing production as a whole rather than as a series of small parts. The new technology would cause disruption, but this was part of the long history of technological advancement in societies. The computer itself was an order taker, a human creation that served as an instrument for efficiency and for more productive knowledge work.
Today, we still wrestle with the same questions of knowledge and power and the potential for social disruption due to technological advancement. New knowledge still wields enormous power, now the power to influence emotions, attitudes, and beliefs, undermining the very nature of truth and trust in institutions. Where nuclear weapons threatened physical destruction, deep fakes, data breaches, and financial scams powered by AI and targeted algorithms call into question the essence of reality itself. Can we trust our own ears and eyes, much less the dominant institutions of society? Drucker's order-taking computer, the "moron" of his writing, is now capable of generating material, not just computing. Generative AI is rapidly producing increasingly sophisticated texts, images, and music as it refines its use of available information and its relationship with the user.
As discussed in previous installments, the effective use of knowledge, or the conversion of information into useful knowledge, involves wisdom and judgment. Here is where the differentiation between generative AI and human beings lies, and where we can better understand how people and technology can collaborate effectively. Drucker often remarked that the key to effective problem solving was asking the right question, in essence, framing the problem itself. This was more important than finding the "right" answer; finding the "right" answer to the "wrong" question results in wasted time (and perhaps even more problems than the one you tried to solve).
Simply put, AI is not designed for this function. In its most basic form, AI responds to information with a limited menu of options (customer-service chatbots, for example). In its more sophisticated iterations, it is designed to fulfill goals predetermined by humans. When we delegate a decision to an algorithm, we do so within parameters set by humans. Algorithms are designed to execute; even more sophisticated tools, such as ChatGPT, require human instruction. They are designed to solve problems. They are not designed to decide which questions to ask in order to solve a problem (although one function they can serve is to help guide people in figuring out possible questions to ask). In this sense, even today's AI reflects Drucker's view of computers as order-takers.
In our discussion of wisdom, we acknowledged the human problems of misinformation, narrow focus, filtering flaws, and bias as barriers to good judgment. If we are to partner with technology in the form of AI, we need to be even more cognizant of our own flaws as human beings. We are the ones driving the technology and its use. How does delegating decision making to algorithms perpetuate the flaws that already exist in our own judgment? What are the consequences? At what point does delegating knowledge work to a machine that has no wisdom create more social problems than benefits? These are the questions we need to be asking. Answering them requires the higher-order thinking that, dare I say, Drucker proposed in his concept of Management as a Liberal Art. With his pillars of knowledge, self-knowledge, wisdom, and leadership, Drucker gave us valuable tools and lessons for navigating our new world of knowledge work.
So, how can we effectively collaborate with AI to create an effective knowledge society for tomorrow?
· Understand the limitations of technology: AI will reflect the quality of the information it uses; to use a tired phrase, "garbage in, garbage out." Algorithms are also susceptible to the cultures, biases, and limitations of the human beings who create them. Technology is a human creation, not something outside of us. Drucker told us this beginning in the 1950s!
· Understand the limitations of human beings: People will use technology to do work they would rather not do themselves. Students will use ChatGPT to write papers. People will use deep fakes and other techniques to advance their causes. This does not mean the technology is bad; it means we need to learn how to regulate and monitor its use. Drucker used the concept of Federalism to discuss the need for guardrails and checks and balances. Our global society is having these conversations about AI now.
· Know how to leverage wisdom and judgment: Leaders need, more than ever, to emphasize skills that used to be referred to as "soft." In a world awash in data and technology, we increasingly need people who are well versed in emotional intelligence, who can discern and make judgments in times of rapid change, and who can connect honestly with their team members. As we make decisions about delegating decisions to non-humans, the need for human connection will only increase. Our new knowledge society needs people who understand people, not just technology and data. But it also needs people who can use their wisdom and judgment to know when to rely on technology. In the words of Scott Hartley, we need both the "Fuzzy" and the "Techie."
Rather than seeing AI as a threat to our humanity, as competition for knowledge work, we should see it as a development that allows us to think deeply about our role as human beings in our new knowledge society. In her book In AI We Trust, Helga Nowotny, Professor Emerita of Science and Technology Studies at ETH Zurich, argues for the importance of "cathedral thinking," the ability to appreciate the value of shared, inherited practices that are constantly being reevaluated and realigned. It includes the kind of interdisciplinary, critical thinking I discussed in the previous installment, but it also involves connecting the past with both the present and the future. In Nowotny's words: "Wisdom consists in linking the past with the future, advising what to do in the present. It is about rendering knowledge retrievable for questions that have not yet been asked" (Nowotny, 2021). I think Nowotny makes a clear case for the relevance of Drucker's work today. We may be frightened by new knowledge and technology, by its power and its impact on our lives. But this is the reaction of people ill-equipped to face the reality of change, change that carries the possibility not just of disruption but of opportunity. As Drucker wrote more than half a century ago, humans have survived technological change as part of the natural order of things. The key to understanding today's technological change is to see it as a matter of collaboration, not competition. This is the trajectory of our new knowledge society: one in which human wisdom and judgment augment the power of AI. AI can help us understand our own limitations and flaws, which can, in turn, make us better as people.
Agrawal, A., Gans, J., & Goldfarb, A. (2023). How large language models reflect human judgment. Harvard Business Review, June 12.
Drucker, P. F. (1967, December). The manager and the moron. McKinsey Quarterly. Reprinted in Drucker, P. F. (1970). Technology, Management and Society. New York: Harper & Row.
Hartley, S. (2017). The fuzzy and the techie: Why the liberal arts will rule the digital world. Houghton Mifflin Harcourt.
Jarrahi, M. H. (2018). Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making. Business Horizons, 61(4).
Moser, C., den Hond, F., & Lindebaum, D. (2022). What humans lose when we let AI decide. MIT Sloan Management Review, 63(3), 11-15.
Nowotny, H. (2021). In AI we trust: Power, illusion, and control of predictive algorithms. Polity Press.