As Global Head of Knowledge Management in Pharma Technical Development at F. Hoffmann-La Roche in Basel, Switzerland, Etzard’s focus is on delivering processes and tools for effective knowledge utilisation.
What is Knowledge Management, and where does Agile play a role?
Knowledge Management (KM) is at the core of what every pharmaceutical company does. Like Learning or Digital Transformation, KM aims to impose structure on the processes and tools we use to manage our data across matrixed organisations. Recent years have brought new challenges to our knowledge flow due to agile management principles. In general, agile requires roughly 20% more time and effort invested in information and knowledge management, but it holds the promise of dramatically increasing our efficiency. Hence, KM is a central department in any pharma organisation today.
What exactly does Knowledge Management entail?
To increase efficiency and deliver more innovative drugs, pharma companies need to focus on their core function: innovation and insights. Some pharma companies are setting themselves very challenging targets, hoping to double or triple their efficiency in the coming years. The question is how we can leverage KM to achieve this dramatic optimisation. Other answers come out of business excellence, lean manufacturing, learning, or IT; but all approaches focus on standardisation, simplification, and reducing the complexity of our information environment.
KM can increase the productivity and efficiency of every one of our company’s employees. During the last few years here at Roche we have, for example, developed state-of-the-art solutions for semantic and numerical integration, automated processes for large-scale document management, and AI-based information retrieval and classification tools. All our project teams follow the agile method within a very large IT organisation, but embedded in a research & development function.
Where does AI sit in Knowledge Management?
Despite the current buzz around AI and ML tools, the key challenge of these 40-year-old technologies is quality. Hundreds of new tech companies offer pharma novel insights, typically applying relatively similar ML methods, but they all struggle with the same challenge: quality. Whilst “location, location, location” may ring true for estate agents, my mantra as a knowledge manager will always be “quality, quality, quality”. Why? Because most current AI/ML tools have poor precision and generalisability. All of us know of examples where an AI solution was oversold, i.e. the data quality was insufficient. In the real world, we need large projects to structure information in a meaningful way, so that these new tools can deliver the optimisation they promise at a quality that can be trusted. That means AI/ML projects need to focus on the core deliverable first, and then work to reach the necessary quality.
The recent hype around AI/ML has become possible through improved algorithms, e.g. in artificial neural networks, and huge increases in hardware performance. Yet, for these new methods to work, we also require traditional activities like master data management and taxonomy creation. At Roche, my department now provides concept-based search and classification tools used by more than 10,000 people in manufacturing and R&D, running on a high-performance compute cluster, AND services for manual curation (e.g. for taxonomies and training document sets).
For us, this poses a continued challenge: how can we further automate these manual activities at a quality we can trust? Latent topic extraction, e.g. via Latent Dirichlet Allocation (LDA), is one way in which we’ve seen the value of applied AI. For example, we’ve built interactive models that have proven particularly useful in identifying previously unknown manufacturing deviations, or in assessing drug metabolism variation.
How can we improve the ROI of AI / ML in pharma?
The current focus of AI/ML in pharma has been on Machine Learning, e.g. Artificial Neural Networks (ANNs). While such tools are very good at classifying patterns, a lot of our knowledge does not lend itself well to this type of problem. Thus every pharma company seems to have wasted millions on the creation of advanced tools, only to discover that the real-world value is quite limited. Many such systems fail as soon as they leave the proof-of-concept phase, when real data is being used.
A good complement to learning systems in pharma are rule-based approaches. Imagine, for example, the challenge of finding internal experts based on their authorship of documents. While authorship is always accurate in validated systems, the metadata for regular “file dumps”, like shared team drives, is much more difficult. Typically, documents have been moved around, and the person who uploaded a presentation is not necessarily its author. Thus, at Roche, we use a rule-based approach to extract proper author names. For example, for Certificates of Analysis, the author is always on the first page, bottom left, in the two strings next to the “CoA” string; a batch record ID is always on the second page, in the middle. Such rules would have been very difficult to capture efficiently with a learning approach like ANNs.
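A rule of this kind reduces to a small, testable pattern match. The sketch below illustrates the idea with a hypothetical extractor; the “two strings next to the CoA marker” layout follows the example above, but the function name, the regular expression, and the sample text are all invented for illustration, not Roche’s actual document format.

```python
import re
from typing import Optional

def extract_coa_author(first_page_text: str) -> Optional[str]:
    """Return the two tokens immediately following a 'CoA' marker.

    Implements the rule described in the text: on a Certificate of
    Analysis, the author name is assumed to sit next to the 'CoA'
    string on the first page. Returns None if the rule does not match.
    """
    match = re.search(r"\bCoA\b\s+(\w+)\s+(\w+)", first_page_text)
    if match:
        return f"{match.group(1)} {match.group(2)}"
    return None

# Invented sample page text for illustration.
page = "Certificate of Analysis ... CoA Maria Keller Batch 4711"
print(extract_coa_author(page))  # -> Maria Keller
```

The appeal of such rules is exactly what the text notes: they are trivial to verify and debug, whereas training an ANN to locate author names across heterogeneous layouts would require large labelled datasets and still produce uncertain output.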
But rule-based systems are not without problems either. The key challenge is to reduce the problem space to a very small area where quality is sufficient and few surprises creep up. This is called the “real-world problem”. The world is very complicated, and we, as humans, have unconsciously learnt so many rules that it is still too much for any computer system to capture. The same holds true in the pharma world, where we are trying to find new molecules to help our patients.
One of my favourite illustrations of why we have to use AI/ML in a focused way is the example of Expert Systems in the automobile industry. Expert Systems held the promise of substituting experts with computer systems – after all, a computer can remember facts much better than humans can, right? So, many years ago, a German automobile manufacturer wanted to clone the knowledge of its best mechanics for all repair shops in the world. They put their five best engineers, factory managers, and repair specialists in one room for a very long time, recorded everything they knew about their engines, and built a true digital expert.
Yet, this system failed spectacularly. Frustrated and looking for answers, the company investigated what went wrong. In one tell-tale instance, they found a mechanic in a remote area who finally got the computer up and running, and read the first instruction: use the last two digits of the engine block number and proceed to the designated manual section. But because he had never looked for this number before, and his engine was filthy, he quickly gave up and continued the repair as he had always done. Of course, one could now add the instruction “please clean your engine before starting”, but there are thousands of other edge cases like this that would have doomed the system right from the start.
That’s the real-world problem. That is why generic approaches to AI/ML fail, and why our models and solutions have to be very specific to a particular use case. Yet even very focused models will lose their relevance quickly in an agile world like ours. For example, while it might seem possible to record all inputs and outcomes to solve any problem in drug discovery, this will never work out in practice. Models in more static areas like molecule design last quite well, but many AI/ML models in other areas have a lifespan of only 2–3 years.
Do you envision a solution to the existential real-world problem?
No, not in the near term. For the next few years, we will have to continue to build systems and platforms for very specific use cases. In KM that means we have to offer clever modules, e.g. to predict potential risks of projects in the coming months, but not attempt to build “The Insight Pipeline”. In summary, there are new technologies that can help increase efficiency, but none are good enough without a strong focus on quality – and that focus narrows their applicability and usefulness. Many pharma companies would do well to review their AI/ML projects in that light, and to refocus these activities under the proper lens of knowledge management.
An interesting question remains, of course: what happens when computers start to actually reason and generate insights? There is a wide range of science-fiction answers as to what will happen when computers become smart enough to improve themselves, with their speed and capability then improving exponentially. What this will mean for humanity depends on your personal perspective, from visions of paradise to a real-life Terminator. However, is the basic assumption true? Yes, it certainly is – computers will, in the coming decades, reach a level of sophistication that will greatly accelerate drug development. But they will not replace the connectedness and common sense that only humans have.
It’s possible that the current pressures on pharma will also bring about a change in the way we treat our key assets: information and knowledge. While other industries hire CS/IT professionals, in pharma we still claim “the science is the hard part”, and spend billions on generating new insights but only 0.001% on preserving them.
Do you have any key takeaways for a Knowledge Management novice?
- KM offers a huge opportunity to improve the output and quality of work
- There are a great number of new tools and capabilities in the AI/ML space that help automate and improve the quality of your knowledge streams
- However, the key challenge that remains for any advanced method is quality – quality of our data, quality of the models, quality of the outcomes
- Embrace these advanced technologies, but focus on sufficiently small use cases to make the investment of time and effort in tweaking these tools and processes worthwhile
- The biggest ROI is always in organisational change management projects: simplifying and harmonising data capture, storage, and exchange
- Hire real CS / IT professionals
- With KM, the key challenge is organisational change: getting people to change the way they work. My biggest success has not been fancy AI, but getting 6,000 people to follow a single document management process. That has been painful, but ultimately it has greatly increased our efficiency