The missing knowledge revolution


We are living through a fundamental (as opposed to incremental) data revolution, which in turn enables an equally fundamental information revolution. In the classical DIK progression of Data, then Information, then Knowledge, we are still missing the third layer. That knowledge revolution isn’t happening, and I argue it won’t happen unless we have stronger leaders shaping the right direction.

Klaus Schwab introducing our panel discussion.

Earlier this month I was part of a WEF panel on the current trends in knowledge and technology. The panel had the Forum’s leads on these topics and the amazing Claire Boonstra (just check her TED talk on value-based education). The plenary had ~500 fellow global leaders from around the world. We had a great discussion, and these are some thoughts collected before, during and after the session (under the Chatham House Rule).

The data revolution rests on several facts, among them the exponentially diminishing costs of generating, storing and processing data. Very roughly speaking, every 5 years these costs drop by one order of magnitude, and we have one order of magnitude more data. “Silicon Valley” is pushing this trend as part of its business model: more data to profile users and to optimize ads and operations.
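To make that back-of-envelope scaling concrete, here is a small, purely illustrative sketch. The 10x-every-5-years rate is my rough assumption above, not a measured constant, and the function name is just for illustration:

```python
# Back-of-envelope sketch of the rough scaling described above:
# roughly one order of magnitude cheaper (and one order of magnitude
# more data) every five years. Illustrative only.
def relative_cost(years, period_years=5, factor=10):
    """Cost relative to today, assuming a `factor`x drop every `period_years` years."""
    return factor ** (-years / period_years)

for years in (5, 10, 20):
    print(f"after {years:2d} years: cost x{relative_cost(years):.4f}, "
          f"data volume x{1 / relative_cost(years):.0f}")
# after  5 years: cost x0.1000, data volume x10
# after 10 years: cost x0.0100, data volume x100
# after 20 years: cost x0.0001, data volume x10000
```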

All that data, and the tools to process it, generate lots of useful information. (I’m going to include internet “content” – articles, websites, tweets – as information too.) As such, there is a race to optimize, customize and contextualize the user experience based on its usage data. And here is where the current path drives us away from knowledge: shallow information and biased information.

Our brain loves novelty. As technology optimizes to our behavior, we get smaller and smaller pieces of information in a constant stream: Facebook feeds, Twitter feeds, news aggregators. Less and less friction to consume (not to create) a river of bite-size content made specifically for us. The flip side of contextual is biased. We get what we like, in very effective and oddly transparent silos, far away from a comprehensive view of the world’s happenings. And we get an infinite buffet of catchy bite-size content that displaces any other appetite for richer content. For the first time in human history we have instant access to virtually any topic, yet most of the content we consume would have been considered utterly irrelevant just a few years ago (pictures of your friend’s food, passing thoughts, or ex-boyfriends of actors spotted in shopping malls). We are drowning in information, but we are drowning in shallow waters.

Curiously enough, competing with this passing river of tweets and BuzzFeed pieces, there is a (growing?) trend toward more in-depth, long-form, higher-quality content. I’m thinking of The Atlantic, The New Yorker, The Economist, or even Medium. For Medium I could dig up some statistics, and there the optimal length of a post is a meager 7 minutes of attention.

With ever-decreasing friction to access, and ever more refined personalization of what we want, and get, we are moving further and further away from the knowledge revolution. Knowledge creation is hard, slow and clumsy. In fact, knowledge management is a very important and difficult field in technology where – the point of this session – the Forum is investing a lot of effort. In my experience it only works when the incentives for creation and management are closely aligned with the core values of the organization (like consulting companies enforcing a strict process, since they need to get their consultants up to speed very quickly).

The reason is that knowledge generation is hard, very hard. It is the deliberate process of concentrating on filtering and absorbing information and constructing the abstracted logic, the underpinnings that explain the information we receive. It requires deep dives into lots of information; it requires focus and energy.

Interestingly enough, in this drive to optimize and process information, machines are starting to do “learning” in the very same way we just defined knowledge. These algorithms get access to vast amounts of data, information, computing time and energy. And they optimize their inner “neural networks” to create abstract layers of representation that explain and predict. They can do, better than us, tasks we always thought required human understanding, like diagnosing diseases, driving cars, or understanding multilingual conversations in a noisy room. This is extremely fascinating, and by definition we can’t know the scope of where it could go.
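To give a flavor of what “abstract layers of representation” means in practice, here is a minimal, self-contained sketch (not from the panel or any particular system): a tiny two-layer neural network, in plain NumPy, that learns the XOR function. The hidden layer is the learned internal representation; real systems differ only in scale.

```python
# Minimal illustration of a network learning an internal representation.
# Purely a sketch -- real systems use vastly more data and compute.
import numpy as np

rng = np.random.default_rng(0)

# XOR is not linearly separable, so the network must build an
# intermediate (hidden) representation of the inputs to solve it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights: input -> hidden ("representation"), hidden -> output.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: the hidden activations H are the learned representation.
    H = sigmoid(X @ W1)
    out = sigmoid(H @ W2)

    # Backward pass: gradient of the squared error, pushed through both layers.
    d_out = (out - y) * out * (1 - out)
    d_hidden = (d_out @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ d_out
    W1 -= lr * X.T @ d_hidden

print(np.round(out, 2))  # approaches [[0], [1], [1], [0]]
```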

We are letting the tool, the technology, drive this process. We get an optimized view of the world and our networks, driven by an infrastructure that favors herd behaviour and shallow constant streams, processed by machines that are actually learning. I love technology, and I think there is huge potential for greatness; but I fear we are leaving our brains out of the process. I look around and see personalized social streams polarizing fluid mobs of likes and retweets… Ochlocracy is a word we’ll start to hear more and more.

We are not seeing the knowledge revolution because machines are living the knowledge revolution, while we get distracted by social media.

I think we are just unfolding tools that will fundamentally change our present and future, and we need strong and transparent leadership to make this a process driven by our intended outcomes, not tool outputs; driven by inclusiveness, not elites. Encourage academic progress and blue-sky research, but always be on the lookout for applied solutions to our global and local problems. Rethink how technology can support thinking, not replace it. Use technology to steer away from cookie-cutter, age-based education… Be daring and bold in aiming for “moon shot” projects, while keeping our feet on the ground and purpose in mind.
