Enterprise information portals offer a single viewpoint into a variety of systems, and one of the most important uses of that viewpoint is knowledge management. Philip Hunter explores how.
One of the claimed benefits of implementing enterprise information portals (EIPs) is facilitating knowledge management (KM), the exploitation of information and expertise for the greater corporate good. EIPs offer the prospect of a single viewpoint into the systems that enable the retention and flow of valuable corporate intelligence.
Like those of EIPs, KM's roots can be found in executive information systems, business process re-engineering, workflow management and business intelligence. KM is also intimately related to messaging, collaborative working, customer relationship management, enterprise resource planning and content management.
Clearly, this single view is of a complicated data landscape, so it is not surprising that integration between, and access to, different vintages of IT infrastructure is an important aspect of KM.
Ultimately, the best way of sharing knowledge is for people to talk to each other, so the greatest issues associated with KM are cultural. Technology is a great facilitator, but any system will make mistakes when it puts information together on the grounds of some automatically deduced context.
To be fair, vendors admit this, and are struggling to position themselves as providers of complete KM delivery services rather than purveyors of the underlying technology. "KM isn't really about technology, it's a business process first," says Steve McGibbon, CTO of Lotus. This might seem a brave admission from a company that has been touting a KM portal bundle - codenamed Raven - but McGibbon is confident that Lotus will be able to deliver the back-up training and consultancy to make KM tick.
Vendors will also have to make their products easier to use if they are to gain real mindshare within enterprises, according to Tom O'Connor, head of knowledge management at BG Plc, which is a big user of KM products.
O'Connor places considerable faith in some leading KM products, notably Raven, which he has trialled, and Excalibur's RetrievalWare, which he rates as the best search engine around. However, he believes that KM products in general can lack consideration for end users, who may not be expert in Boolean search techniques.
One of the axioms underlying BG's internal KM system called Kite (knowledge and information to everyone), is the principle of information ownership by individuals, who should be responsible for its upkeep. "When individuals post information, we give them a template to fill in," says O'Connor. "This includes name, phone number, when the information needs to be revived or killed off the system, and where or how it can be found."
For this type of initiative to work, staff need to be given incentives to contribute to the knowledge base and keep it up to date. Although in such systems each document should have just one owner responsible for its upkeep, it may have multiple associations with individuals who hold related knowledge, while a greater number of people may be interested in sharing that knowledge. All these associations need to be captured by a KM system and be easily accessible.
Another important principle enshrined within the best KM implementations should be that they are an integral part of the working environment, both culturally and technologically. This plays to one of corporate portal and KM specialist Autonomy's strengths, as this intelligent search-enabling company has gone further than most in weaving KM into the day-to-day working fabric rather than leaving it as an optional extra.
Cambridge, UK and San Francisco-based Autonomy Inc sells its solutions to a burgeoning number of US and European customers, including General Motors and US government agencies. Its second-quarter 2000 revenues and gross profits grew by 174% and 168% respectively over the same quarter of 1999, with a net quarterly income of $3.4 million.
But it is only recently that Autonomy has appreciated the importance of integrating KM tightly into systems that people use for other applications. In this way, users have no choice but to use it, according to the company's director of corporate communications and KM evangelist Ian Black. "Users discovered that if they rely on people to go and look for things in KM projects, then they have already admitted to failure," he says.
Autonomy has technology well-suited to proactive KM, based on an inference engine that tries to deduce user assistance needs by analysing their recent actions, while consulting a profile of their known requirements. When KM systems are made more proactive, they are more successful, says Black.
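As a toy illustration of the idea (the scoring scheme, names and weights below are invented for this sketch, not Autonomy's actual engine), a proactive assistant might rank candidate documents by blending their similarity to the user's recent actions with a stored interest profile:

```python
def proactive_score(doc_words, recent_words, profile_words,
                    recent_weight=0.7, profile_weight=0.3):
    """Toy relevance score: Jaccard overlap between a document and the
    words drawn from the user's recent activity, blended with overlap
    against a longer-term interest profile. Weights are arbitrary
    illustrative choices, not taken from any real product."""
    def overlap(a, b):
        union = a | b
        return len(a & b) / len(union) if union else 0.0
    return (recent_weight * overlap(doc_words, recent_words)
            + profile_weight * overlap(doc_words, profile_words))

# A memo about an Autonomy deployment scores higher for a user who is
# currently writing a presentation about Autonomy than an unrelated file.
memo = {"autonomy", "deployment", "memo", "managers"}
holiday = {"holiday", "travel", "beach"}
recent = {"autonomy", "presentation", "deployment"}
profile = {"knowledge", "management"}
print(proactive_score(memo, recent, profile))
print(proactive_score(holiday, recent, profile))
```

The blend of short-term context with a known profile is the essential move; a real engine would use far richer statistical features than word overlap.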
"I was working on a PowerPoint presentation for one IT director. It so happened that Autonomy flashed up a link to a memo that the IT director had written to a number of his managers about deploying Autonomy, and it wasn't encouraging. But Autonomy had spotted that this might be relevant while I was working in PowerPoint, and I included this in my presentation. The IT director was won over immediately."
Impressive though this is - if worrying for the security of high-level corporate memos - it is only a small beginning. Such search processes are more proactive, but they are still feeding the KM system documents, and much of any enterprise's knowledge is implicit (see box, What knowledge can you manage?); in other words, locked up in people's heads, or buried within data.
Exploiting implicit knowledge - knowledge embedded within data - is a different game, but the potential for productivity improvements is just as great. This field evolved under the banner of data mining - the discovery of hidden yet valuable relationships between data. More recently, this principle has been taken up within the manufacturing industry where margins are often under pressure from factors like currency movements and high oil prices.
A pioneering user in this field is manufacturing company B2T, whose chairman Steve Hudd realised a few years ago that maintenance data produced by process control systems had great potential value for streamlining processes. To exploit this data, Hudd developed a simple measure of production line efficiency from three factors: rate of output; yield (usable product); and machinery utilisation.
This result, called occupacity, is a measure of a production line's overall output relative to its maximum potential. The key point is that process control data contains information of how this measure has varied over time as various influencing factors, such as the raw materials and ambient conditions, have changed.
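As a hedged sketch of how such a measure might combine the three factors (the multiplicative form and parameter names are assumptions for illustration; B2T's exact formula is not given here):

```python
def occupacity(rate_achieved, rate_max, yield_fraction, utilisation_fraction):
    """Illustrative occupacity measure: a line's actual output relative
    to its maximum potential, combining run rate, yield (usable product)
    and machinery utilisation. The multiplicative form is an assumption,
    not B2T's published formula."""
    return (rate_achieved / rate_max) * yield_fraction * utilisation_fraction

# A line running at 90% of its maximum rate, with 95% yield and
# 80% utilisation, achieves roughly 68% of its potential output.
print(occupacity(rate_achieved=90, rate_max=100,
                 yield_fraction=0.95, utilisation_fraction=0.8))
```

Tracking how such a figure moves as raw materials and ambient conditions change is what turns routine process-control logs into tuning knowledge.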
B2T has developed various software modules that tap this information to tune its factory and its production lines for improved occupacity and decreased costs. One such module calculates minimum economic run length: the minimum time it is worth having a production line running to make a given product. This equates to the minimum amount of a particular product that is worth making in one batch, and can, says Hudd, lead to conclusions that fly in the face of conventional wisdom.
Since automotive giant Toyota pioneered the concept, the manufacturing industry has chanted the mantra of Just In Time (JIT) - meaning that rather than wasting money stockpiling materials, these should arrive just as they are needed.
But suppose a company's order book reveals a scattering of small batches of different goods scheduled to be manufactured over the next few weeks. The minimum run length calculation might reveal that some of these orders are uneconomical to fulfil. And even if they were all barely profitable, they might become significantly more so if they could all be batched together. To cater for such eventualities, the factory might need to carry some stock, contradicting the long-held wisdom of stockless JIT.
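A minimal sketch of that run-length reasoning (the cost figures and the simple linear model are invented for illustration): a batch is worth running only if the margin it earns at least covers the cost of changing the line over, which shows immediately how two individually uneconomic orders can become profitable when batched together.

```python
def min_economic_run_hours(changeover_cost, margin_per_hour):
    """Shortest run on a product that recovers the cost of switching
    the line over to it (illustrative linear model only)."""
    return changeover_cost / margin_per_hour

CHANGEOVER = 1000.0   # cost of one line changeover (hypothetical figure)
MARGIN = 150.0        # contribution margin per production hour (hypothetical)

threshold = min_economic_run_hours(CHANGEOVER, MARGIN)  # about 6.7 hours

# Two 4-hour orders for the same product: each loses money on its own,
# but batched into a single 8-hour run they clear the changeover cost.
single_order_ok = 4 * MARGIN > CHANGEOVER        # False: 600 < 1000
batched_orders_ok = (4 + 4) * MARGIN > CHANGEOVER  # True: 1200 > 1000
print(threshold, single_order_ok, batched_orders_ok)
```

Batching the orders means holding finished stock between deliveries, which is exactly the tension with stockless JIT described above.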
With this in mind, B2T has teamed up with Oracle for database management, and can link its manufacturing modules with the Oracle-based sales and distribution analysis calculations. In this way, it becomes possible to optimise the whole production and distribution cycle to identify customers whose orders are unprofitable because they live in a remote area, or because their orders are small or require a changeover of the production line.
These are examples of exploiting tacit knowledge embedded within data, where the problem is one of data mining rather than classification. The early fruits of efforts here have already been incorporated into products. One example is the US company SeeItFirst.com, whose software enables users to zoom in on a particular object within a video: the system accesses a higher-resolution version of the video and grabs the requested frame from it, thereby delivering a better-quality image.
Identifying what such frames actually contain is one problem that the Microsoft European Research Laboratory in Cambridge, UK, is wrestling with. The essence of the problem is to identify images or shapes, such as faces or cars, within a frame, because having done this, the classification task is similar to that used for text documents. The entities may be images, but they can still be classified under text headings.
According to Derek McAuley, assistant director of the Microsoft Laboratory, the neural network technologies of the last few years were heavily oversold, using over-laboured analogies with the human brain to promote them, but they could only identify certain shapes (such as faces) in ideal conditions. Nevertheless, the kind of adaptive pattern-matching techniques on which so-called neural network products were based will, in some enhanced form, be used to solve problems such as image identification.
It is no surprise that, in this complex area, more difficulties remain unsolved than have been solved. The KM technology that will make EIPs cost-effective is evolutionary, so a consolidation of existing techniques is needed rather than a revolutionary approach.
And it is worth considering that progress in fields such as document searching and speech recognition owes more to increases in processing power and main addressable memory than to advances in the algorithms themselves - the technology that mimics intelligence.
What knowledge can you manage?
Knowledge is a relative rather than an absolute concept, as is information. Information may be defined as organised data, while knowledge can be defined as the result of expertise applied to information.
Consider, for example, the documentation that accompanies prescription drugs, which comprises notes on dosage and possible side effects. In terms of the application of a KM system, is this information or knowledge? The answer depends on perspective.
From the drug company's point of view, it is knowledge, because it embodies expertise in determining a usage profile on the basis of its research. But, using our definition, it is only information to doctors, who will then use their expertise to determine whether to prescribe the drug to a patient. In short, knowledge categorisation is a user-dependent process. Different knowledge types require different approaches, but while cases always arise to test the rules, it makes sense to do this within a coherent KM policy. This task can be simplified by sub-dividing KM into clear components. According to Doug Kalish, VP and chief knowledge officer of ecommerce consultancy Scient, KM applications should be built on four pillars:
* Content, including both internal and external sources of information.
* Infrastructure, comprising the IT mechanisms, such as search engines, for accessing the content.
* Process, meaning the steps by which the knowledge is created, analysed, communicated and exploited.
* Culture, meaning attitudes to knowledge sharing, and the measures taken by the company to encourage it, for example incentives and training.
Two approaches to KM
Once a system has identified key information properties in a document, whether by affinity mapping (the Lotus method), or probabilistic inference (preferred by Autonomy and Microsoft), there still remains the problem of categorising documents.
It can be argued that documents can only be classified effectively if a system can 'understand' key words within them, so that, for example, it 'knows' that Microsoft is a software company. Then the words 'Microsoft' and 'software' can be associated, and searches for one might elicit documents citing just the other.
The alternative approach is purely statistical, involving no understanding on the part of the KM system. Instead, the system detects combinations of words that occur within documents, because it finds that any document containing one is more likely to also contain the other than a document drawn at random. This is an application of the laws of conditional probability developed by Thomas Bayes, and references to Bayesian inference crop up frequently in KM research literature. Indeed, Autonomy's inference engine is based on Bayes' theorem.
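A toy version of that statistical association (the corpus and word pair below are invented; real engines work over far larger collections) compares the conditional probability P(w2 | w1) with the base rate P(w2): a ratio above 1 means documents containing w1 are more likely than average to contain w2.

```python
def association_strength(docs, w1, w2):
    """Ratio P(w2 | w1) / P(w2) over a corpus of documents, each
    represented as a set of words. Values above 1 indicate that w1 and
    w2 co-occur more often than chance would predict - the
    conditional-probability association described in the text."""
    n = len(docs)
    with_w1 = [d for d in docs if w1 in d]
    p_w2 = sum(1 for d in docs if w2 in d) / n
    if not with_w1 or p_w2 == 0:
        return 0.0
    p_w2_given_w1 = sum(1 for d in with_w1 if w2 in d) / len(with_w1)
    return p_w2_given_w1 / p_w2

corpus = [
    {"microsoft", "software", "windows"},
    {"microsoft", "software", "redmond"},
    {"software", "linux", "vendor"},
    {"holiday", "travel", "beach"},
]
print(association_strength(corpus, "microsoft", "software"))  # above 1
print(association_strength(corpus, "microsoft", "travel"))    # no co-occurrence
```

Note that the system never "understands" anything here; it simply counts, which is exactly why the Redmond-style coincidental associations discussed below can slip in.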
However, a purely statistical approach might yield associations that are coincidental and not particularly useful. 'Microsoft' would likely be associated statistically with the location of its headquarters, Redmond, yet this is of little value to the majority of its customers. A KM system that understands that Redmond is a place might be capable of making more intelligent associations.
Yet even a well-formulated Boolean search may yield a large number of relevant documents, some of which may be very long.
Law firms understand this well, according to Mike Robinson, IT director of legal practice Bevan Ashford. The company has been using a KM system called Fulcrum from Hummingbird, which includes document summarising technology. "We were a bit sceptical at first," admits Robinson, "but it has saved us a lot of time. We tried to process a trial document manually and it took 20 to 30 minutes. Fulcrum did it in seconds, and although not quite as good, it was as near as damn it."
Oddly enough, the summarising is a statistical process. Emulating the human ability to précis a document is well beyond present natural language processing, but Fulcrum identifies keywords and close associations, so in effect the summary is just a glorified search-and-collect process, albeit one that penetrates the full text of individual documents.
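In that spirit, a naive extractive summariser (a sketch of the general statistical approach, not Fulcrum's actual algorithm) scores each sentence by how frequent its words are across the whole document and returns the top scorers in their original order:

```python
import re
from collections import Counter

def summarise(text, num_sentences=2):
    """Naive extractive summary: score each sentence by the document-wide
    frequency of its words, then return the highest-scoring sentences in
    their original order. Illustrative only."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    words = re.findall(r'[a-z]+', text.lower())
    # Crude stopword filter: ignore very short words when building frequencies.
    freq = Counter(w for w in words if len(w) > 3)

    def score(sentence):
        return sum(freq[w] for w in re.findall(r'[a-z]+', sentence.lower()))

    top = sorted(sentences, key=score, reverse=True)[:num_sentences]
    return ' '.join(s for s in sentences if s in top)

text = ("Knowledge management systems store corporate knowledge. "
        "Cats are nice. "
        "Knowledge sharing improves knowledge management outcomes.")
print(summarise(text, num_sentences=2))
```

Sentences dense with the document's dominant vocabulary survive; the off-topic sentence is dropped, which is roughly the "search and collect" behaviour described above.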