Welcome to the Knowledge Management Body of Knowledge (KMBOK) site. This site represents a live mock-up of what the final KMBOK might look like. Simply click away to explore different topics.
What makes this BOK different from other BOKs?
Unlike other BOKs, which require a tremendous amount of manual labor to build out and maintain, this KMBOK was automatically synthesized from KM data using a paradigm called Data Driven Synthesis (DDS). DDS allows a computer to create massive, feature-rich, high-quality web sites like this one with a small fraction of the effort required by traditional Content Management Systems, such as WordPress, Drupal, Plone, and all wikis. When the data compiler executes, it generates many different Knowledge Constructs (KCs) that include but are not limited to:
How to use this KMBOK Site
This site is broken down by categorical areas, which you can find in the Master Catalog (located in the menu bar at the top of every page) or in the left-side navigation pane of every page (a subset of the Master Catalog). You can explore KM-related topics, views, and semantic relationships that are rendered in textual and/or graphical formats. Every topic area has its own catalog page, and every catalog page presents the reader with various indexed views of data and content.
IMPORTANT: This Entire Site is Nothing But HTML Documentation
It is important to understand that this entire site is nothing more than static HTML documentation that was automatically generated by a data compiler and then organized and rolled up into a construct that the IF4IT calls a Digital Library. There is no database or application server, and there are no transactions (e.g. put, get, delete). The documentation is easily viewable from a storage device with nothing more than a common modern browser, which acts as the HTML Reader. Sharing the generated web site with a broader community is achieved by publishing the entire documentation tree via a web server like Apache, usually taking a few seconds of work.
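Because the site is nothing but static files, "publishing" really does reduce to pointing any web server at the documentation tree. The sketch below illustrates this with Python's standard-library HTTP server and a throwaway one-page site; the file name and content are illustrative assumptions, not part of the actual KMBOK output.

```python
import functools
import http.server
import os
import tempfile
import threading
import urllib.request

# Hypothetical stand-in for a generated documentation tree: one static page.
site_root = tempfile.mkdtemp()
with open(os.path.join(site_root, "index.html"), "w") as f:
    f.write("<html><body><h1>KMBOK home</h1></body></html>")

# Serve the directory of static HTML as-is; no database or app server needed.
handler = functools.partial(http.server.SimpleHTTPRequestHandler,
                            directory=site_root)
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Any HTTP client (normally a browser, acting as the HTML Reader) can now
# fetch the published page.
url = f"http://127.0.0.1:{server.server_address[1]}/index.html"
page = urllib.request.urlopen(url).read().decode()
server.shutdown()
print(page)
```

The same tree could equally be copied under an Apache document root or opened directly from disk in a browser; nothing in the pages depends on the server.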
How long did it take to create this entire site?
There were three distinct phases involved in constructing this site...
Phase 1: Collecting and organizing the data
This phase of the work took a couple of hours. It is mostly a one-time effort that is done to set up content for the site. All future iterations will most likely involve no more than a few minutes of work, even for significant content changes and site structure changes. Remember, there is no traditional metamodel, and every new site you generate rearranges itself based on your data (e.g. your spreadsheet tabs or your flat files).
In this phase, the content librarian simply goes to numerous web sites to collect the information that they wish to publish. The information is then saved in flat files, such as spreadsheet tabs, to give it some simple structure (one flat file per topic area or data type). These flat files are then fed to the compiler, which compiles the site.
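The flat-file-to-site step can be sketched in a few lines of code. The following is a minimal illustration of the general Data Driven Synthesis idea, not the NOUNZ compiler itself: the topic name, column layout ("name", "description"), and page structure are all assumptions made for the example. Each flat file (here, CSV text) yields one catalog page plus one page per record, with links generated automatically.

```python
import csv
import html
import io
import pathlib
import tempfile

def synthesize(flat_files: dict, out_dir: pathlib.Path) -> int:
    """Compile flat files (CSV text keyed by topic area) into static HTML.

    Returns the total number of pages generated.
    """
    pages = 0
    for topic, csv_text in flat_files.items():
        rows = list(csv.DictReader(io.StringIO(csv_text)))
        links = []
        for row in rows:
            name = row["name"]
            # One detail page per record in the flat file.
            page = out_dir / topic / f"{name}.html"
            page.parent.mkdir(parents=True, exist_ok=True)
            page.write_text(f"<h1>{html.escape(name)}</h1>"
                            f"<p>{html.escape(row['description'])}</p>")
            links.append(f'<li><a href="{name}.html">'
                         f"{html.escape(name)}</a></li>")
            pages += 1
        # One catalog (index) page per topic area, linking every record.
        catalog = out_dir / topic / "index.html"
        catalog.write_text(f"<h1>{html.escape(topic)}</h1>"
                           f"<ul>{''.join(links)}</ul>")
        pages += 1
    return pages

out = pathlib.Path(tempfile.mkdtemp())
data = {"glossary": "name,description\nTaxonomy,A classification scheme\n"}
page_count = synthesize(data, out)
print(page_count)  # one record page plus one catalog page
```

Adding a row to the flat file and re-running `synthesize` would produce an extra page and an extra catalog link with no manual page editing, which is the core of the approach described above.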
Phase 2: Branding the Site
This is a one-time process that took a few minutes to set up color schemes, about 10 minutes to create and include some logos, and about 15 minutes to create landing pages like this home page.
Phase 3: Running the Data Compiler and Synthesizing the KMBOK Web Site
It took about seven (7) seconds to automatically generate this entire site, including all pages, many tens of thousands of HTML links, and all Knowledge Constructs (see the list, above, for examples of KCs). It took just a few more seconds to publish the entire site to a web server so that you can see it and use it.
Understanding Change Management For This Web Site
Traditional Content Management Systems (CMSs) like WordPress, Drupal, Plone and all Wikis require content librarians to manually create or change every page impacted, one at a time. The NOUNZ data compiler, on the other hand, allows content librarians to automatically create or modify web pages, in bulk.
This is achieved using a long-existing Knowledge Management paradigm called Data Driven Synthesis (DDS). Data Driven Synthesis means you use data to drive the creation (i.e. synthesis) of something else (in this case, a very large Knowledge Management web site).
Using NOUNZ, the content librarian stores and maintains content in separate flat files (like spreadsheet tabs) that are ingested by the data compiler in order to create or recreate the site. If flat files change, the librarian can simply resubmit them to the compiler and regenerate the entire site. Tens, hundreds, thousands and even many millions of pages and HTML links will automatically rebuild themselves and the site will automatically restructure itself to accommodate content changes. What would normally take many weeks or even many months of work can now be performed in seconds or minutes. Managing the site and all its changes becomes much easier because managing the content is done outside the Content Management System, in far simpler tools like spreadsheet software. Librarians spend more time focused on data/content quality and far less time on web site work. The NOUNZ data compiler takes over the bulk of the web site work.
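The regeneration workflow described above can be sketched as a full rebuild: instead of editing individual pages, the generator discards the previous output tree and rebuilds every page from the current data. The generator below is an illustrative stand-in, not the NOUNZ compiler, and the record names are made up for the example.

```python
import pathlib
import shutil
import tempfile

def regenerate(records: list, out_dir: pathlib.Path) -> list:
    """Rebuild the entire static site from the current data set.

    Pages for removed records disappear, and pages for new records
    appear, without anyone touching individual pages.
    """
    if out_dir.exists():
        shutil.rmtree(out_dir)  # discard the previous site entirely
    out_dir.mkdir(parents=True)
    for name in records:
        (out_dir / f"{name}.html").write_text(f"<h1>{name}</h1>")
    return sorted(p.name for p in out_dir.iterdir())

site = pathlib.Path(tempfile.mkdtemp()) / "site"
first = regenerate(["alpha", "beta"], site)           # initial build
second = regenerate(["alpha", "beta", "gamma"], site)  # data changed: rebuild
print(first)
print(second)
```

Because the rebuild is cheap and complete, change management collapses to editing flat files and re-running the compiler; the site's structure simply follows the data.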
Learn More about NOUNZ and Data Driven Synthesis (DDS)
You can read more about the NOUNZ Compiler at the IF4IT NOUNZ Product Page.
Contact the IF4IT for further information and details on Membership and Sponsorship for you or your enterprise.
The software and documentation associated with the IF4IT NOUNZ compiler are a Copyright of the International Foundation for Information Technology (IF4IT), as of September 2010.
Note: NOUNZ is a registered trademark of the International Foundation for Information Technology (IF4IT).
NOUNZ is a product that is created, sold and licensed by The International Foundation for Information Technology (IF4IT) and has been used to generate this Web Site.