
Musings from Mars » Internet
News Posts In Category

June 8th, 2012

In search for civility online, is the Golden Rule the answer?

ISO civility in online comments - The Washington Post. This is a spot-on article pointing out the horrible state of interpersonal communication on the web. Nothing new, really -- it's been this way for years, but it's just getting worse. One big insight is the relationship between the "blinders on" mentality of those who troll the web and the "don't bother me with facts" mentality of the Tea Party and their ilk, like survivalists and members of the Government Paranoid. One group feeds the other, and they never read anything but what they agree with. These folks aren't looking for a conversation -- they're looking for a fight. And the rest of us must try really hard to turn the other cheek and not let the fight begin.
January 20th, 2012

Bye Bye, Google

That's it... I'm done. Fed up. I've taken Google and shoved it off my system. Don't get me wrong... I like Google, though I like Google far less than I did 6 years ago. Its software has gotten too complicated... it's too ubiquitous... it's too intrusive... and it's too Windows-centric. I really hate Chrome, but don't have time to go into why today. I like Google Earth on my iPad, but hate it on my desktop. On the desktop, it looks just like a Windows app: Butt ugly. But what I really hate is the sneaky and intrusive way Google updates itself on my Mac. Without warning, Finder suddenly jerks me away as it loads the latest Google update and then deletes it when done. I just don't need that. I don't use Google's desktop software, so it's bye bye Keystone agent. Bye bye Google update agent. Bye bye Google Earth plugins and updates. It took me 20 minutes to finish everything, and I hope it doesn't start up again without my knowledge the next time I launch Chrome.
Posted in: Internet, Software Musings
July 23rd, 2011

“Just Say No To Flash”
Join The Campaign! Add A Banner To Your Website

In the past few years, Adobe Flash has become more than an annoyance that some of us have kept in check by using "block Flash" plugins for our web browsers. More and more, entire web sites are being built with Flash, and they have no HTML alternative at all! This goes way beyond annoying, into the realm of crippling.

I had noticed the trend building for quite a while, but it only really hit home when I realized that Google, of all companies, had redesigned its formerly accessible Analytics site to rely heavily on Flash for displaying content. This wouldn't be absolutely horrible except for the fact that Google provides no HTML alternative. I tried to needle the company through its Analytics forums, but only received assurance that yes, indeed, one must have the Flash plugin running to view the site.

Keep in mind that content like that on Google Analytics is not mere marketing information, like the sales pitch on the Analytics home page.

Those of us who are disturbed by the trend need to be a bit more vocal about our opinion. Hence, I'm starting a "Just Say No To Flash!" campaign, with its own web page, graphics for a banner, and the CSS and HTML code to deploy it on your own web pages.

I've mentioned this to some of my family and friends, and they often come back with: "So, Why should I say no to Flash?" I admit that as a power browser and a programmer geek type who, shall we say, makes more efficient use of the web, I'm more keenly aware of the ways that Flash is chipping away at the foundation of web content.

In the beginning, it seemed harmless: Flash was an alternative to animated GIFs, and an easy way to embed movies on web pages. But then advertisers wrapped their meaty mitts around it, and that's when Flash started to be annoying. However, one could block Flash in the browser, as part of a strategy of shutting out obnoxious advertising.

But publishing content via Flash is just wrong, for a number of reasons.

It's A Proprietary Technology
. . . Or, One Company Controls The Standard

I don't think Flash is what Tim Berners-Lee had in mind when he created the first web browser and the markup language called HTML to run the web. Then, as now, the web is meant to be open to all. It is meant to be built using open standards that belong to no individual or company. The main open formats that should be used to build websites are simply:

  • HTML
  • CSS
  • JavaScript
  • Images (open formats)

Open standards for video, audio, vector graphics, virtual 3D graphics, animated graphics, and others are also available to be thrown into the mix.

Adobe PDF is also a common format for distributing final-form documents, and PDF is based on open specifications for both PDF and PostScript that Adobe published back in the 1990s.

It Isn't Backwards-Compatible
. . . Or, How Many Times Do I Have To Upgrade My *!/?#%@! Plugin?

If you install a Flash plugin today, there's no guarantee you'll be able to view Flash content created 2 months from now.

If you have a Flash plugin from 5 years ago, it's probably useless today.

Flash is designed with built-in obsolescence, forcing users to repeatedly visit the Adobe website to get an upgrade. This is not only a bother, it forces one company's advertising into the world's face every time it releases a software update.

It Can't Be Customized
. . . Or, How Do I Increase The Font Size?

From time immemorial (well, at least since the beginning of web time), a web page's text could be customized to suit the user's taste and needs. All web browsers provide the tools to increase/decrease the font size, as well as to specify custom fonts for different page elements (headers, paragraphs, etc).

Flash throws all of that out the window with a terse shrug, "Let 'Em Eat Helvetica 10pt."

Its Content Is Inaccessible
. . . Or, How Do I Drag And Drop Images and Text?

No, you can't drag and drop images or text from Flash content. This most basic method of interacting with a web page—dragging images off the page, or selecting sections of the page to drag onto an email or text processor—is a non-starter if it's part of a Flash file.

Copy and paste? If the Flash programmer has been thoughtful, you should be able to copy and paste text. But don't even try to copy any other page element.

And that includes copying a link's URL. Right-click (Ctrl-click) anywhere in a block of Flash content, and you get the standard Flash popup menu. Not very helpful.

You Can't Save The Page
. . . Or, You Mean, I Can't Save A Copy?

Another common task many web users take for granted is the ability to save a web page as text, as HTML, or in a format like Rich Text Format. With Flash, this is impossible.

You may be able to save the file as a web archive, but there's no open standard for a "web archive," and getting at the content inside one is almost as hard as getting inside a Flash movie.

Flash Consumes More Of Your Computer
. . . Or, Running Flash Diverts Your Processing Power and Memory

When I'm running Flash — as I am now while shopping at Adobe — my Activity Monitor shows it's consuming a continuous 5 percent of my processing power, and about 130 MB of my RAM.

For what? There's nothing a Flash movie can deliver that can't be delivered using open formats. Its heavy resource drain is one reason I keep Flash turned off when browsing the web.

You Can't View Flash on an iPhone or iPad
. . . Or, I thought Apple was the bad guy here?

Apple has very good reasons for not supporting Flash on its tiny devices. As the previous point makes clear, Flash isn't a delicate, lightweight technology that your processor and RAM won't notice.

When trying to build hardware and software for small devices that work well and don't lead to memory problems or application crashes, why wouldn't you ditch unnecessary technologies like Flash?

Obviously, Steve Jobs stepped into a hornets' nest here, but I think the hornets were wrong.

Make Your Site Say No To Flash

It's easy! Just follow these three steps:

1. Download the Image(s)

You can copy and save one of the following images, or download the Photoshop source and make your own.

Just Say No To Flash - Banner At Top Right
Just Say No To Flash - Banner At Bottom Right
2. Add the CSS

Here are two CSS styles for positioning the Just Say No To Flash banner on your web page. One positions the banner at the top-right, and the other at the bottom-right. To use the styles, just copy and paste the following code into the <HEAD> portion of your HTML.

To place the banner at the top-right corner of your page:
<style>
a#noFlash {
  position: fixed;
  z-index: 500;
  right: 0;
  top: 0;
  display: block;
  height: 160px;
  width: 160px;
  background: url(images/noFlashTR.png) top right no-repeat;
  text-indent: -999em;
  text-decoration: none;
}
</style>
To place the banner at the bottom-right corner of your page:
<style>
a#noFlash {
  position: fixed;
  z-index: 500;
  right: 0;
  bottom: 0;
  display: block;
  height: 160px;
  width: 160px;
  background: url(images/noFlashTR.png) bottom right no-repeat;
  text-indent: -999em;
  text-decoration: none;
}
</style>
3. Add the HTML

Add the following to the beginning of your HTML, just below the <BODY> tag, or at the end, just before the closing </BODY> tag:

<a id="noFlash" href="http://www.musingsfrommars.org/notoflash/" title="Just Say No To Flash!">Just Say No To Flash!</a>

Please always link your image to http://www.musingsfrommars.org/notoflash/ so everyone can find the information associated with the image.

Thanks to the "Too Cool for Internet Explorer" campaign run by w3junkies for the concept behind "Say No To Flash," as well as for the general outline of information that campaign provided.

May 23rd, 2009

Compass: A New Concept for Managing CSS Styles

Compass is an open-source project built on Rails that's currently in development. It proposes to provide a full-fledged framework for CSS stylesheets, whereby you would store data in Compass and then generate styles as needed for your various website projects. Compass also anticipates the need to use CSS as one way of including semantic data with your website.
February 21st, 2009

Taking a Snapshot of the Semantic Web:
Mighty Big, But Still Kinda Blurry


It's still somewhat difficult to get a handle on exactly what is meant by the "Semantic Web," and whether today's technologies are truly able to realize the vision of Tim Berners-Lee, who first articulated it back in 1999. From what I've read, I think there's general agreement that we aren't even close to being "there" yet, but that many of the ongoing Semantic Web activities, technologies, development platforms, and new applications are a big leap beyond the unstructured web that still dominates today.

There is a huge, seemingly endless amount of work being done by thousands of groups all trying to contribute to making the Semantic Web a reality. In my few weeks of research, I still feel as though I've just dipped my toe into that vast lake of semantic experimentation. Partly as a result of the many disparate projects, it becomes rather difficult to see the entire forest for all the tiny trees. That said, these thousands of groups do appear to be working more or less together on the basis of consensus-based open standards, and they have set up mechanisms to keep everyone abreast of new ideas, solutions, and projects, under the general leadership of the World Wide Web Consortium (W3C)'s Semantic Web Activity.

Semantic Web Stack As Envisioned by Berners-Lee

As a starting point for exploration into this topic, the Wikipedia article that describes the Semantic Web Stack is quite good. Along with a good overview and many useful links, the article includes the original conception of the Stack as designed by Berners-Lee.

Beyond cataloguing the sheer number of projects tackling different aspects of building a Semantic Web, it's important to distinguish ongoing projects from those that expired years ago—a distinction that's not always readily apparent to those peering in from the outside. Even excluding the expired ones, there are far too many projects to read up on in a few weeks, so this snapshot is necessarily incomplete. But after having the content reviewed by some Semantic Web experts, I'm confident it includes all the most significant threads of this new web, which, as Berners-Lee envisioned it:
I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web – the content, links, and transactions between people and computers. A ‘Semantic Web’, which should make this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The ‘intelligent agents’ people have touted for ages will finally materialize.
In my tour of the Semantic Web as it exists today, it's interesting to note that most of the projects are geared not toward machine-to-machine interaction, but rather toward the traditional human-to-machine variety. Humans being by nature anthropocentric, the first steps toward Berners-Lee's vision are to build systems that are semantically neutral with respect to human-to-human communication. Once we can reliably discuss topics without drifting off into semantic misunderstandings, then perhaps we can start teaching machines "what we mean by" ...

This paper is an attempt to assess the current state of those first steps, while compiling a list of resources that would prove useful to someone thinking about building a Semantic Web application in 2009.

Challenges to Building Semantic Web Applications

The process of applying concepts from the Semantic Web to build richer, more knowledge-oriented applications presents developers with several challenging prerequisites:
  • Taxonomies for the content being published,
  • Ontologies for the content, based on the developed taxonomies,
  • Content tagged using the developed ontologies,
  • Database tools for storing and serving RDF and/or OWL ontologies,
  • Database tools for connecting ontologies with the content they describe,
  • Application server specializing in querying and formatting semantic content,
  • User interface tools to present semantic content in optimum, not necessarily traditional, ways.
Ontology Standards

One of the base specifications for ontologies, RDF (Resource Description Framework), is a well-established standard based on XML and URIs, and it is the basis for all the news feeds and podcasts one can subscribe to today. DAML (DARPA Agent Markup Language) is one of the early ontology standards built as an extension to XML and RDF. Still widely used, DAML is also the precursor to OWL. OWL (Web Ontology Language) is a sophisticated framework built on top of RDF and is perhaps the best known and most widely adopted of these ontology languages. OWL is the standard adopted by the W3C (the official standards body for web specifications). At the moment, there are several different flavors of OWL, which makes adopting OWL more challenging than using RDF.
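To make the layering concrete, here is a small, hypothetical fragment in Turtle notation (all names after `ex:` are invented for illustration). Plain RDF simply states facts as triples; OWL, built on top of RDF, adds class and property semantics:

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix ex:   <http://example.org/ont#> .

# Plain RDF: a single triple stating a fact (subject, predicate, object)
ex:MusingsFromMars ex:publishedBy ex:SomeAuthor .

# OWL layered on top of RDF: classes and a typed property
ex:Weblog a owl:Class .
ex:Person a owl:Class .
ex:publishedBy a owl:ObjectProperty ;
    rdfs:domain ex:Weblog ;
    rdfs:range  ex:Person .
```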
Each of these requirements presents a fairly steep learning curve to developers who have not previously worked with the technologies needed to build Semantic Web applications. Solutions that assist with some of the requirements exist, but it's not clear how effective they are at this stage. For example, I have listed some tools that assist in extracting semantics from unstructured documents, and others that do something similar with content stored in relational databases. On the other hand, the process of tagging unstructured content appears to have no good automated solutions.

The process of building an ontology can be quite time-consuming, unless there happens to be an existing ontology you can reuse. There are several extensive online libraries of ontologies that can help. One fly in this bibliographical ointment, however, is the difficulty one may face in choosing among different, perhaps conflicting, ontologies on the same topic. It's important to emphasize that one of the first steps in building an ontology is to build a taxonomy. Although ontologies are not taxonomies, they use taxonomies as their jumping-off point. Therefore, one has to be ready and able to build a taxonomy for a subject before one can build an ontology.

Many of the tools and projects included here are designed to assist with building and browsing a web of Linked Data, rather than true semantic data. Some of the demo browsers for linked data don't strike me as being particularly relevant to most end-user requirements for knowledge management. However, linked data is quite useful in integrating content from across the web, and projects built around it typically make heavy use of RDF, SPARQL, and other related specifications. Websites that make use of linked data represent the vanguard of the application of Semantic Web concepts, and their number appears to be exploding at the moment. Well-known examples of the use of linked data are websites built as "mashups," such as those based on Google Maps.
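SPARQL, the query language mentioned above, retrieves RDF data by matching triple patterns. A minimal, hypothetical query (the `ex:` vocabulary is invented for illustration) might look like this:

```sparql
PREFIX ex: <http://example.org/ont#>

# Find every weblog and the person who publishes it
SELECT ?blog ?person
WHERE {
  ?blog a ex:Weblog ;
        ex:publishedBy ?person .
}
```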
Use of Microformats and RDF triples is also a typical component of websites that expose their content as linked data.

More powerful tools exist in the form of integrated application server suites, such as the OpenLink Virtuoso server, Cyc Knowledge Server, and Intelligent Topic Manager. The first two have open source versions that developers can use to "dip their feet" into the task of building a Semantic Web application using sample data. Of the two, I was most impressed with the breadth of tools in the OpenLink project, as well as with the range and vibrancy of the Virtuoso developer community. Virtuoso also comes with a rich set of user interface "widgets" that can be of great assistance in presenting semantic information appropriately.

A Possible Approach

To sum up, the landscape of the Semantic Web is still quite fuzzy and volatile, with many mountains of activity building up rapidly and eroding with nearly equal speed. Which landforms will remain once the evolution is complete is impossible to say here in 2009. However, the landscape is exciting to watch and flush with tantalizing experiments that will undoubtedly inspire more experimentation in the years ahead.

Obviously, given all of the preceding caveats, the decision to engage in a Semantic Web experiment cannot be made lightly. One must have a clear idea of the knowledge management/presentation problem that such an experiment is designed to solve, and an understanding of the resources that will need to be devoted to the project. Although the tools, standards, and processes for such a project are still quite young, it would definitely be in the interests of an organization with suitable candidate data and sufficient resources (including time) to begin an experiment of its own as a learning exercise.
What is an ontology?

An ontology is a systematic description of concepts, in such detail and thoroughness that a machine encountering a concept could "understand" it. In this respect, ontology development is closely related to research into artificial intelligence.

In the past, humans have relied on complex taxonomies to describe the way abstract ideas and concrete individuals relate to one another. Ontologies differ from taxonomies in the complexity and thoroughness with which they describe the relationships between the elements of a taxonomy. A typical taxonomy is a tree structure that arrays terms as categories and subcategories. However, no subcategory has any notion of its relationship to its siblings, nor to any other categories elsewhere in the tree. An ontology can describe these relationships, thereby enriching one's understanding of what a given category means.

Further, each category (or "concept," or "class") in the tree can have its own distinct properties. Individuals are the specific instances of each class that the ontology needs to be able to describe, and properties are characteristics of a class that describe relationships among individuals and help distinguish one group of individuals from another. For example, if we have a class "job" in our ontology, with a subclass "administrative" and a further subclass "computer specialist," we could distinguish all the individuals who are computer specialists by defining the job's characteristics (properties). A computer specialist "writes software programs," "performs desktop support," "manages databases," "builds web applications," and so on. With an ontology, we could very richly define a group of individuals using such properties. This is a simplistic overview of properties; OWL provides a vast array of ways to describe properties and of the types of properties one can describe.
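The job example above might be sketched in OWL, using Turtle syntax (all class and property names are invented for illustration):

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix ex:   <http://example.org/jobs#> .

# The taxonomy: a tree of classes
ex:Job                a owl:Class .
ex:Administrative     a owl:Class ; rdfs:subClassOf ex:Job .
ex:ComputerSpecialist a owl:Class ; rdfs:subClassOf ex:Administrative .

# Properties that distinguish computer specialists from sibling classes
ex:writesSoftware   a owl:ObjectProperty ; rdfs:domain ex:ComputerSpecialist .
ex:managesDatabases a owl:ObjectProperty ; rdfs:domain ex:ComputerSpecialist .

# An individual: a specific instance of the class
ex:employee42 a ex:ComputerSpecialist .
```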
I would advise against a major expenditure for such an experiment, however. As noted, given the state of the technology, it strikes me as unwise to invest a large sum in any commercial product to use as an application platform. Most of the tools that exist for building Semantic Web applications have open source licenses, so it makes sense to restrict experimentation to such tools for now.

The data store chosen for such an experiment should ideally be one that currently suffers from being both fragmented and unstructured, existing in incompatible file formats and stored in different locations within the organization's Intranet—all factors that make it difficult for users to locate specific information. Given the uncertainties surrounding such an experiment, the data store chosen should also be one that is not so volatile that time pressures can cause discontinuities in content over the course of the project.

Whoever undertakes such a Semantic Web experiment needs to be prepared to conclude that the effort required to bring the experiment to fruition is too great to justify the added value. Even if this were to prove true in 2009, I'm confident that the impressive swirl of activity taking place now will coalesce into truly usable techniques and tools within a few years. The standards on which the Semantic Web will be built are still evolving, but they are much more mature than the methods developers have built to turn those standards into working applications. Therefore, having gotten one's feet wet in the state of things this year will undoubtedly provide a solid foundation for building Semantic Web applications in coming years.

The bulk of this report consists of a compilation of resources on various aspects of the Semantic Web and developing Semantic Web applications. The resources are divided into the following categories:
Ontology Development Tools

Protege
  • Protege comes in two "flavors": Version 3.4 handles both OWL and RDF ontologies, while 4.0 is geared toward the latest OWL standards only.
  • Impressive software for creating OWL ontologies.
  • User interface is well organized, given the complexity of the objects and properties you're dealing with. The interface also must handle multiple views of the information, and it does so quite well.
  • Numerous plugins for Protege make specific tasks easier. There are many more plugins for Protege 3.4 than for 4.0 at this time.
  • One plugin enables database connections, with which you can import entire databases or tables, including their contents. Tables typically become OWL objects, and columns become object properties. Impressively, this tool also creates a complete form with which you can enter new instance information. Each form field can also be customized after creation.
  • Protege can also export ontologies to "OWL Document" format, which is a browsable HTML representation of the ontology.
  • Stanford is developing a web-based version of Protege. The beta URL is at Web Protege.
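The database import described above (tables become classes, columns become properties, rows become individuals) can be pictured with a hypothetical `employees` table, again in Turtle syntax with invented names:

```turtle
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix ex:  <http://example.org/db#> .

# The table becomes a class...
ex:Employees a owl:Class .

# ...each column becomes a property...
ex:name  a owl:DatatypeProperty .
ex:title a owl:DatatypeProperty .

# ...and each row becomes an individual
ex:row1 a ex:Employees ;
    ex:name  "J. Smith" ;
    ex:title "Computer Specialist" .
```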
Protege Plug-Ins
  • OntoLT. The OntoLT approach aims at a more direct connection between ontology engineering and linguistic analysis. Used with Protege, OntoLT can automatically extract concepts (Protege classes) and relations (Protege slots) from linguistically annotated text collections. It provides mapping rules, defined by use of a precondition language, that allow for a mapping between linguistic entities in text and class/slot candidates in Protege. (This plug-in is only available for Protege 3.2.)
  • There are a wide array of plug-ins for Protege 3.2, and a much smaller set for 4.0. This page from the "old" Protege wiki has good links to the full library of Protege plug-ins.
  • Ontowiki is a tool providing support for agile, distributed knowledge engineering scenarios. It facilitates the visual presentation of a knowledge base as an information map, with different views on instance data. It enables intuitive authoring of semantic content, with an inline editing mode for editing RDF content, similar to WYSIWYG for text documents. Ontowiki is built on the Powl platform. I have downloaded and installed an instance of Ontowiki on my home computer; the installation and configuration were quite simple.
Application Development Tools
The list in this section is just a small subset of the tools now available for building Semantic Web applications. There are several complete, continuously updated lists on the web, including those at SemWebCentral and the Semantic Web Company.

Developer Resources
  • SemWebCentral is an Open Source development web site for the Semantic Web. It was established in January, 2004 to support the Semantic Web community by providing a free, centralized place for Open Source developers to manage Semantic Web software and content development. Another purpose is to provide resources for developers or other interested parties to learn about the Semantic Web and how to begin developing Semantic Web content and software. SemWebCentral has the following major portals:
  • Web Tools by category, a list of 148 projects organized by topic and a wide variety of other attributes.
  • Code snippets, an archive of code snippets, scripts, and functions developers have shared with the open source software community.
  • Learn About the Semantic Web, a collection of overviews, tutorials, and papers covering Semantic Web topics.
  • Programming With RDF is part of the RDF Schemas website. It has links to repositories of programmer resources by programming language, showing the kind of documentation, code, and tutorials covered by the repository.
  • Semantic Web Tools is a comprehensive list of over 700 developer tools now available for semantic-web-related projects. There are several such lists on the web, but this one is particularly good since it breaks the list down by category and language, making it much easier to narrow down the list you're interested in. This site is hosted by the Semantic Web Company.
  • Developers Guide to Semantic Web Toolkits collects links to Semantic Web toolkits for different programming languages and gives an overview about the features of each toolkit, the strength of the development effort and the toolkit's user community.
Frameworks

Sesame
    • Extensions and Plugins
    • Rio, a set of parsers and writers for RDF that has been designed with speed and standards-compliance as the main concerns. Currently it supports reading and writing of RDF/XML and N-Triples, and writing of N3. Rio is part of Sesame, but can also be downloaded and used separately.
    • Elmo is a toolkit for developing Semantic Web applications using Sesame. Elmo wraps Sesame, providing a dedicated API for a number of well known web ontologies including Dublin Core, RSS and FOAF. The dedicated API makes it easier to work with RDF data for the supported ontologies. Elmo also offers a set of tools related to the supported ontologies, including an RDF crawler, a FOAF smusher and a FOAF validator.
  • Sesame is an open source Java framework for storing, querying and reasoning with RDF and RDF Schema. It can be used as a database for RDF and RDF Schema, or as a Java library for applications that need to work with RDF internally. Sesame is extremely flexible in how it's used and can work with a variety of data stores, including relational databases and native RDF files. It can be deployed as a server, or as a library incorporated into another application framework. For example, Sesame can be used simply to read a big RDF file, find the relevant information for an application, and use that information. Sesame provides the necessary tools to parse, interpret, query and store all this information, embedded in another application or, if appropriate, in a separate database or even on a remote server. More generally, Sesame provides application developers a toolbox that contains all the necessary tools for building applications with RDF. Commercial support for Sesame is available from Aduna Software.

    Sesame also has a large ecosystem of add-ons and related toolsets. The following are the main links to these.

    Jess is a rule engine and scripting environment written entirely in Sun's Java language by Ernest Friedman-Hill at Sandia National Laboratories in Livermore, CA. Using Jess, you can build Java software that has the capacity to "reason" using knowledge you supply in the form of declarative rules. Jess is small, light, and one of the fastest rule engines available. Its powerful scripting language gives you access to all of Java's APIs. Jess includes a full-featured development environment based on the award-winning Eclipse platform.

    A Jess Plugin for Protege is available, integrating Jess development with your ontology.

    • ARQ is a query engine for Jena. It supports multiple query languages (SPARQL, RDQL, and ARQ, the engine's own language), and besides Jena it can be used with general-purpose engines and remote access engines. ARQ can also rewrite queries to SQL.
    • Joseki is an HTTP server-based system that supports SPARQL queries. Joseki features a WebAPI for the remote query and update of RDF models, including both a client component and an RDF server. The Joseki server can run embedded in an application, as a standalone program, or as a web application inside a suitable application server (such as Tomcat). It provides the operations of query and update on models it hosts.
  • Jena is a Java framework for building Semantic Web applications. It provides a programmatic environment for RDF, RDFS, OWL, and SPARQL, and includes a rule-based inference engine. Jena is open source and grew out of work with the HP Labs Semantic Web Programme. Important tools related to the Jena framework, including ARQ and Joseki, are described above.
The OWL API
    • RDF/XML parser and writer
    • OWL/XML parser and writer
    • OWL Functional Syntax parser and writer
    • Turtle parser and writer
    • KRSS parser
    • OBO Flat file format parser
    • Support for integration with reasoners such as Pellet and FaCT++
  • The OWL API is an open-source Java interface and implementation for OWL, focused on OWL 2, which encompasses OWL-Lite, OWL-DL and some elements of OWL-Full. The OWL API was used to build Protege 4.0 and was developed by the CO-ODE project, which collaborates with Stanford University on Protege. It encompasses tools for the tasks listed above.
    Powl is a web-based platform for collaboratively building and managing ontologies. It supports many of the features of mature tools like Protege, but as a web application that can be used for team development of ontologies. Powl is an open-source project that uses PHP and various RDBMSs on the back end. OntoWiki is an example of a collaborative application built using Powl.
Visualization and Query Tools
Jambalaya and OntoVista
    The University of Georgia, as described in the Semantic Application Demos section below, has built a large number of interesting semantic software tools. OntoVista is a particularly useful ontology visualization, navigation, and query tool based on Jambalaya. OntoVista is adaptable to the needs of different domains, especially in the life sciences. The tool provides a semantically enhanced graph display that gives users a more intuitive way of interpreting nodes and their relationships. Additionally, OntoVista provides comfortable interfaces for searching, semantic edge filtering, and quick browsing of ontologies.
SWRL (Semantic Web Rule Language)
    SWRL is intended to be the rule language of the Semantic Web and is based on OWL. It allows users to write rules to reason about OWL instances and to infer new knowledge about those instances.
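For instance, the often-cited SWRL example rule hasParent(?x,?y) ^ hasBrother(?y,?z) -> hasUncle(?x,?z) infers uncle relationships from parent and brother facts. A rough Python sketch of what applying such a rule to RDF-style triples amounts to (this is an illustration, not SWRL syntax or any real API):

```python
# Illustrative sketch of applying one Horn rule, the classic SWRL example
#   hasParent(?x,?y) ^ hasBrother(?y,?z) -> hasUncle(?x,?z)
# to a set of (subject, property, object) triples.

def apply_uncle_rule(triples):
    parents = {(s, o) for s, p, o in triples if p == "hasParent"}
    brothers = {(s, o) for s, p, o in triples if p == "hasBrother"}
    inferred = set()
    for x, y in parents:
        for y2, z in brothers:
            if y == y2:  # join on the shared variable ?y
                inferred.add((x, "hasUncle", z))
    return inferred

triples = {
    ("Alice", "hasParent", "Bob"),
    ("Bob", "hasBrother", "Carl"),
}
print(apply_uncle_rule(triples))  # {('Alice', 'hasUncle', 'Carl')}
```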
    Pellet is an open source, OWL DL reasoner in Java that is developed, and commercially supported, by Clark & Parsia LLC. Pellet provides standard and cutting-edge reasoning services. It also incorporates various optimization techniques described in the DL literature and contains several novel optimizations for nominals, conjunctive query answering, and incremental reasoning.

    Pronto is an extension of Pellet that enables probabilistic knowledge representation and reasoning in OWL ontologies. Pronto is distributed as a Java library equipped with a command line tool for demonstrating its basic capabilities. It is currently in development stage—more robust and mature than a mere prototype, but less mature than a production-level system like Pellet.

    Pronto offers core OWL reasoning services for knowledge bases containing uncertain knowledge; that is, it processes statements like “Bird is a subclass-of Flying Object with probability greater than 90%” or “Tweety is-a Flying Object with probability less than 5%”. The use cases for Pronto include ontology and data alignment, as well as reasoning about uncertain domain knowledge generally; for example, risk factors associated with medical conditions like breast cancer.

OWL Ontology Validator
  • This online tool, developed as part of the WonderWeb Project, attempts to validate an ontology against the different "species" of OWL. Any constructs found which relate to a particular species will be reported. In addition, if requested, the validator will return a description of the classes, properties and individuals in the ontology in terms of the OWL Abstract Syntax.
Seamark Navigator
    Seamark Navigator is part of the commercial Information Access Platform from Siderean. Navigator is the relational navigation server component, which discovers and indexes content, pre-calculates relationships, and suggests paths for data exploration. Its primary architectural components include a metadata aggregator, a scalable RDF store, and a relational navigation engine, all within an industry-standard Web services interface.
Unstructured Content Mining Tools
Calais
  • The Calais Web Service automatically creates rich semantic metadata for the content you submit – in well under a second. Using natural language processing, machine learning and other methods, Calais analyzes your document and finds the entities within it. Calais goes beyond classic entity identification and returns facts and events hidden within your text as well.
Cortex Competitiva Platform
  • Cortex Competitiva combines state-of-the-art text-mining technologies with consolidated data-mining techniques. The main modules of the platform are Information Collection, Information Organization and Collaboration, and Information Use Analysis.
IdentiFinder Text Suite
    IdentiFinder Text Suite, a product of BBN Technologies, lets users quickly sift through documents, web pages, and email to discover relevant information. It helps solve the classic problems of text mining: First, how to identify significant documents and then, how to locate the most important information within them.
DL-Learner
    DL-Learner is a tool from AKSW for learning concepts in Description Logics (DLs) from user-provided examples. Equivalently, it can be used to learn classes in OWL ontologies from selected objects. The goal of DL-Learner is to construct knowledge about existing data sets. With DL-Learner, users provide positive and negative examples from a knowledge base for a not-yet-defined concept. DL-Learner then derives a concept definition such that, when the definition is added to the background knowledge, all positive examples follow and none of the negative examples do. See also the Wikipedia entry for ILP (Inductive Logic Programming); what DL-Learner addresses is the ILP problem applied to Description Logics / OWL.
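As a toy illustration of the learning problem just described (invented function names, nothing from DL-Learner's actual API), here is a sketch restricted to concepts that are conjunctions of named classes:

```python
# Toy concept learner: find a conjunction of named classes that covers
# every positive example and no negative example. (Illustrative only;
# DL-Learner searches a far richer space of OWL class expressions.)
from itertools import combinations

def learn_concept(memberships, positives, negatives):
    """memberships: dict mapping instance -> set of classes it belongs to."""
    # Candidate atoms: classes shared by all positive examples.
    shared = set.intersection(*(memberships[p] for p in positives))
    # Prefer the smallest conjunction that excludes every negative.
    for size in range(1, len(shared) + 1):
        for combo in combinations(sorted(shared), size):
            if all(not set(combo) <= memberships[n] for n in negatives):
                return set(combo)
    return None  # no conjunction of named classes separates the examples

memberships = {
    "tweety": {"Bird", "Pet"},
    "robin":  {"Bird", "Wild"},
    "rex":    {"Dog", "Pet"},
}
print(learn_concept(memberships, ["tweety", "robin"], ["rex"]))  # {'Bird'}
```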
Transformation Tools
GRDDL
    GRDDL is a mechanism for Gleaning Resource Descriptions from Dialects of Languages. It is a technique for obtaining RDF data from XML documents and in particular XHTML pages. GRDDL provides an inexpensive set of mechanisms for bootstrapping RDF content from XML and XHTML. GRDDL does this by shifting the burden of formulating RDF away from the author to transformation algorithms written specifically for XML dialects such as XHTML. A repository of transformations is available.
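A minimal sketch of the gleaning idea follows; the transformation link is modeled here as a made-up data-transformation attribute rather than GRDDL's real profile/link mechanism, and the transform itself is a toy:

```python
# Sketch of the GRDDL idea (hypothetical names, not a real GRDDL
# library): the document names a transformation, and the harvester
# applies the matching registered algorithm to glean RDF triples.
import xml.etree.ElementTree as ET

def hcard_transform(root):
    """Toy transform: glean name triples from class="fn" elements."""
    triples = []
    for el in root.iter():
        if "fn" in el.get("class", "").split():
            triples.append(("_:card", "vcard:fn", el.text))
    return triples

TRANSFORMS = {"hcard2rdf": hcard_transform}  # repository of transforms

def glean(xhtml):
    root = ET.fromstring(xhtml)
    name = root.get("data-transformation")  # stand-in for GRDDL's link
    return TRANSFORMS[name](root)

doc = '<div data-transformation="hcard2rdf"><span class="fn">Tim</span></div>'
print(glean(doc))  # [('_:card', 'vcard:fn', 'Tim')]
```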
    The Simile project has developed a large number of "RDFizers," which convert various file formats into RDF. This page also contains links to the many RDFizers developed by other organizations to handle even more document types.
Database Tools
Query Languages and Tools
SPARQL Query Language for RDF
    SPARQL is a W3C specification for querying RDF repositories. It can be used to express queries against native RDF files or against RDF generated from stored ontologies via middleware. The results of SPARQL queries can be result sets or RDF graphs.
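At its core, SPARQL matches triple patterns containing variables against an RDF graph. A toy matcher for a single pattern (a sketch, not a conformant engine, which must also handle joins, filters, and optional patterns) conveys the idea:

```python
# Toy illustration of SPARQL-style triple-pattern matching: terms
# starting with "?" are variables; everything else must match exactly.

def match(pattern, triples):
    results = []
    for triple in triples:
        binding = {}
        for p, t in zip(pattern, triple):
            if p.startswith("?"):
                binding[p] = t  # bind the variable to this term
            elif p != t:
                break           # constant term does not match
        else:
            results.append(binding)
    return results

data = [
    ("jena", "rdf:type", "Framework"),
    ("arq", "rdf:type", "QueryEngine"),
]
# Roughly: SELECT ?s WHERE { ?s rdf:type Framework }
print(match(("?s", "rdf:type", "Framework"), data))  # [{'?s': 'jena'}]
```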
    Owlgres is an open source, scalable reasoner for OWL2. Owlgres combines Description Logic reasoning with the data management and performance properties of an RDBMS. Owlgres is intended to be deployed with the open source PostgreSQL database server. Owlgres’s primary service is conjunctive query answering, using SPARQL-DL.
    D2RQ is a declarative language for describing mappings between relational database schemas and OWL/RDF ontologies. The D2RQ platform uses these mappings to enable applications to access RDF views of a non-RDF database through the Jena and Sesame APIs, as well as over the Web via the SPARQL Protocol and as Linked Data.
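The flavor of such a mapping can be sketched as follows; the mapping structure shown is invented for illustration and is not D2RQ's actual mapping language:

```python
# Rough sketch of the D2RQ idea: declarative rules turn relational
# rows into RDF triples. (The "mapping" dict is a made-up stand-in
# for D2RQ's real mapping vocabulary.)
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE person (id INTEGER, name TEXT)")
db.execute("INSERT INTO person VALUES (1, 'Ada')")

mapping = {  # hypothetical: table -> URI pattern + property/column pairs
    "person": {
        "uri": "http://example.org/person/{id}",
        "properties": {"foaf:name": "name"},
    },
}

def triples_for(table):
    spec = mapping[table]
    cols = [c[1] for c in db.execute(f"PRAGMA table_info({table})")]
    for row in db.execute(f"SELECT * FROM {table}"):
        record = dict(zip(cols, row))
        subject = spec["uri"].format(**record)
        for prop, col in spec["properties"].items():
            yield (subject, prop, record[col])

print(list(triples_for("person")))
# [('http://example.org/person/1', 'foaf:name', 'Ada')]
```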
Conversion/Transformation Tools
OntoSynt
    OntoSynt provides automatic support for extracting from a relational database schema its conceptual view. That is, it extracts semantics "hidden" in the relational sources by wrapping them by means of an ontology. The approach is specifically tailored for semantic information access, enabling queries over an ontology to be answered by using the data residing in its relational sources. Its web interface accepts an XML representation of an RDBMS schema, which can be generated using a tool like SQL Fairy.
    Relational.OWL is an open source application that automatically extracts the semantics of virtually any relational database and transforms this information automatically into RDF/OWL ontologies that can be processed by Semantic Web applications.
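In the spirit of these two tools, here is a much-simplified sketch of schema extraction: each table becomes a class and each column a datatype property. The output vocabulary is abbreviated, and this is not either tool's implementation:

```python
# Sketch of schema-to-ontology extraction in the spirit of OntoSynt /
# Relational.OWL: tables -> OWL classes, columns -> datatype properties.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE employee (id INTEGER, name TEXT)")

def schema_to_ontology(db):
    axioms = []
    tables = db.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")
    for (table,) in tables:
        axioms.append((table.capitalize(), "rdf:type", "owl:Class"))
        for col in db.execute(f"PRAGMA table_info({table})"):
            # col[1] is the column name in table_info's result rows
            axioms.append((col[1], "rdf:type", "owl:DatatypeProperty"))
    return axioms

for axiom in schema_to_ontology(db):
    print(axiom)
```

A real extractor would also turn foreign keys into object properties linking the generated classes.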
SQL Fairy
    SQL Fairy is a group of Perl modules that manipulate structured data definitions (mostly database schemas) in interesting ways, such as converting among different dialects of CREATE syntax (e.g., MySQL-to-Oracle), visualizations of schemas, automatic code generation, converting non-RDBMS files to SQL schemas (xSV text files, Excel spreadsheets), serializing parsed schemas (e.g., via XML), creating documentation (e.g., HTML), and more.
Application Servers
OpenLink Virtuoso Universal Server
  • Virtuoso, developed by OpenLink Software, is a complex product that appears to be a total solution for hosting Semantic Web applications, among other uses. In the company's words, from a recent release: "Virtuoso enables end users, systems architects, systems integrators, and developers to interact with data at the conceptual as opposed to the traditional logical level. Data about customers, suppliers, invoices, and orders, stored in existing ODBC- or JDBC-accessible database systems such as Oracle, Informix, Ingres, SQL Server, Sybase, Progress, and MySQL, can be presented in RDF form for use in Semantic Web applications."
  • Virtuoso is also available in an Open Source Edition, a very active project that includes a large number of modules for use with various content management systems. The main difference between the open source and commercial editions of Virtuoso is the Virtual Database Engine, which essentially enables an application to incorporate multiple data servers in its queries.

    Also available as open source from OpenLink is its OpenLink Ajax Toolkit (OAT), which comes with a wide range of user interface and data widgets, as well as complete applications for building data queries, designing databases, and designing web forms. The OpenLink Data Explorer is one of these standalone OAT applications. Widgets that are part of OAT include:

    The standalone applications running on the Open-Source Edition all incorporate widgets from the OAT to create quite robust, desktop-application-like tools (the username/password for all of these is demo/demo):
  • OpenLink also provides OpenLink Data Spaces (ODS), which run on the Virtuoso server, either the commercial or open-source editions. ODS enables developers to create a presence in the Semantic Web via Data Spaces derived from Weblogs, Wikis, Feed Aggregators, Photo Galleries, Shared Bookmarks, Discussion Forums and more. Data Spaces thus provide a foundation for the creation, processing and dissemination of knowledge for the emerging Semantic Web. ODS is pre-installed as part of the demonstration database bundled with the Virtuoso Open-Source Edition. Existing ODS modules include:
Cyc Knowledge Server
Intelligent Topic Manager
  • Intelligent Topic Manager (ITM) is a commercial semantic software platform that enables a wide range of applications in enterprise information systems. ITM is designed to help organizations leverage, organize and model content and knowledge, to manage business reference models and taxonomies, to categorize and classify content, and to empower search. The platform consists of the following components and functionalities:
Oracle Semantic Technologies
  • Oracle Spatial 11g is an open, scalable RDF management platform. It uses a graph data model in which RDF triples are persisted, indexed, and queried, similar to other object-relational data types. Application developers can use the Oracle server to design and develop a wide range of semantically enhanced business applications.
Asio Tool Suite
    Available from BBN, the Asio Tool Suite is focused primarily on building Semantic Web applications by integrating an enterprise's existing databases and systems without the need for complete reengineering. Designed to address the volume, variety, and exponential increase of enterprise data, the Asio Tool Suite supports information discovery via Semantic Web standards and provides for data accessibility via queries posed in a user's own ontology. The suite further enables integration of systems by building bridges in semantic meaning from one system to another. The suite consists of the following components:
Parliament
    Asio Parliament, released as open source, implements a high-performance storage engine that is compatible with the RDF and OWL standards. However, it is not a complete data management system. Parliament is typically paired with a query processor, such as Sesame or Jena, to implement a complete data management solution that incorporates SPARQL standards. In addition, Parliament includes a high-performance SWRL-compliant rule engine, which serves as an efficient inference engine. An inference engine examines a directed graph of data and adds data to it based on a set of inference rules. This enables Parliament to fill in gaps in the data automatically and transparently, inferring additional facts and relationships in the data to enrich query results.
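The fixpoint behavior of such an inference engine can be sketched with two RDFS-style rules (a toy illustration, not Parliament's engine, which implements the full rule set efficiently inside its storage layer):

```python
# Minimal sketch of RDF inference: apply RDFS-style rules to a graph
# of triples until the graph stops growing.

def infer(graph):
    graph = set(graph)
    while True:
        new = set()
        for s, p, o in graph:
            for s2, p2, o2 in graph:
                # Rule 1: rdfs:subClassOf is transitive.
                if p == p2 == "rdfs:subClassOf" and o == s2:
                    new.add((s, "rdfs:subClassOf", o2))
                # Rule 2: instances inherit membership in superclasses.
                if p == "rdf:type" and p2 == "rdfs:subClassOf" and o == s2:
                    new.add((s, "rdf:type", o2))
        if new <= graph:          # fixpoint reached: nothing new inferred
            return graph
        graph |= new

g = infer({
    ("penguin", "rdfs:subClassOf", "bird"),
    ("bird", "rdfs:subClassOf", "animal"),
    ("pingu", "rdf:type", "penguin"),
})
assert ("pingu", "rdf:type", "animal") in g  # inferred, not asserted
```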
    Asio Cartographer is a graphical ontology mapper based on SWRL. It utilizes the core functionality of BBN's Snoggle open-source mapping tool to assist in aligning OWL ontologies. It lets users visualize ontologies and then draw mappings between them on an intuitive graphical canvas. Cartographer then transforms those maps into SWRL/RDF or SWRL/XML for use in a knowledge base.
    Asio Scout provides semantic bridges to relational databases and web services that let an organization keep its existing systems in place for as long as necessary, for example, to support ongoing operations. Scout's semantic bridges act like any passive data consumer, but unlike their counterparts, their functionality, in concert with Asio Semantic Query Distribution's high-level perspective, enables consolidated knowledge discovery that wasn't previously conceivable. Scout can be used for web portals, standalone desktop applications, or web-enabled applications.
Semantic Application Demos
Browsers and Search Portals
  • Disco - Hyperdata Browser is a simple browser for navigating the Semantic Web as an unbound set of data sources. The browser renders in HTML all information that it can find on the Semantic Web about a specific resource. This resource description contains hyperlinks that allow you to navigate between resources. While you move from one resource to another, the browser dynamically retrieves information by dereferencing HTTP URIs and by following rdfs:seeAlso links.
  • Umbel Subject Concepts Explorer is a lightweight ontology structure for relating Web content and data to a standard set of subject concepts. Its purpose is to provide a fixed set of reference points in a global knowledge space. These subject concepts have defined relationships between them, and can act as binding or attachment points for any Web content or data.
  • Openlink Data Explorer is one product developed from the open-source version of the Virtuoso Universal Server product. This is the platform used by the DBPedia project, including the demos on the DBPedia page. The demo below shows the XHTML view option of a Data Viewer ontology query.
  • Zitgist DataViewer lets users browse linked data on the web, starting from an RDF or OWL ontology URL.
  • The Sindice Semantic Web Index monitors and harvests existing web data published as RDF and Microformats, and makes it available under a coherent umbrella of functionality and services. Its index of data is presented as a search portal much like Google. Sindice was created at DERI, the world's largest institute for Semantic Web research. It is based on DERI's cluster technology, which indexes and operates over terascale semantic data sets (trillions of statements) while also providing very high query throughput per cluster size. Leveraging this cluster technology, Sindice performs sophisticated reasoning that dramatically enhances data reusability, search precision, and recall. It obtains data through focused crawling methods that detect and concentrate on metadata-rich internet sources.
  • The RKB Explorer is an application built using awards data from the National Science Foundation (NSF). It has used this data to build ontologies around NSF grants, and users can search and browse the data through the Explorer. All URIs on this domain are resolvable, and search results deliver HTML or RDF, depending on the content. The browse interface provides viewing and navigating using RDF triples, and the query interface provides access using SPARQL. I discovered this useful application through a search on "NSF funding" using Sindice.
  • Marbles Linked Data Browser is a server-side application that formats Semantic Web content for XHTML clients using Fresnel lenses and formats. Colored dots are used to correlate the origin of displayed data with a list of data sources, hence the name. Marbles provides display and database capabilities for DBpedia Mobile.
  • The Cyc Foundation Concept Browser lets users search and browse the content of the OpenCyc knowledge base.
  • Brownsauce is a Semantic Web browser that lets users browse RDF files on the web. It runs as a local Java client and has a built-in Jetty web server. Brownsauce uses the Jena Semantic Web framework.
Ontology Viewers and Query Tools
  • DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against Wikipedia and to link other datasets on the Web to Wikipedia data. DBpedia is one of the projects developed/sponsored by AKSW. A wide variety of articles and publications about DBpedia have been published (see the Resources section of this report).
  • jSpace is a WebStart Java application that demonstrates how one might search and query a given ontological database. Several example databases are available to download for use with jSpace. jSpace's development was apparently inspired by mSpace. (mSpace was an innovative, but now defunct, project that attempted to merge the power of Google with the powerful interface of iTunes. Although the mSpace demo of a classical music explorer is not accessible now, it's well worth checking out the video demos of it.)
  • Owlsight is an innovative web application that uses the Google Web Toolkit and the Ext JavaScript library to let users navigate OWL ontologies, browsing the relationships between classes, properties, and instances. Owlsight uses the Pellet ontology reasoner.
  • OpenCyc for the Semantic Web is both a project and an OWL ontology browser. Using this tool, users can access the entire OpenCyc content as downloadable OWL ontologies as well as via Semantic Web endpoints (i.e., permanent URIs). These URIs return RDF representations of each Cyc concept as well as a human-readable version when accessed via a Web Browser.
Knowledge/Content Management
  • The KiWi wiki project proposes a new approach to knowledge management that combines the wiki philosophy with the intelligence and methods of the Semantic Web. (KiWi stands for "Knowledge in a Wiki.")
  • DeepaMehta is a software platform for knowledge management. Knowledge is represented in a semantic network and is handled collaboratively. The DeepaMehta user interface is completely based on Mind Maps / Concept Maps. Instead of handling information through applications, windows, and files, with DeepaMehta the user handles all kinds of information directly and individually.
  • Semantic MediaWiki and SMW+ are extensions to the MediaWiki platform, described elsewhere in this report.
Application Repositories
  • MIT's Simile project has been extremely creative and productive in applying concepts of linked data, RDF, and the Semantic Web generally to demonstration applications, all available as open source. (Simile is an acronym for "Semantic Interoperability of Metadata and Information in unLike Environments".) Some of its projects are included elsewhere in this report, but here is a list of some others relevant to the Semantic Web:
    • Longwell, a server application that applies faceted-browsing concepts to visualizing RDF stores.
    • PiggyBank is a Firefox add-on that enables users to develop "mashups" of web data by using "screen scrapers." The software also allows users to tag information found and embed RDF into their content.
    • RDFizers, described elsewhere in this report.
    • Referee, a server application that creates browsable RDF files from web server logs.
    • Welkin, an RDF visualizer built as a client-side Java application. (Note: I couldn't get it to run on my Mac, even though MIT makes a Mac OS X disk image available.)
    • Fresnel, a vocabulary for displaying RDF.
    • Banach, a collection of operators that work on RDF graphs to infer, extend, emerge or otherwise transform a graph into another.
    • Data Collection, a project that aims to develop a collection of RDF data sets that are generally useful for the metadata research and tools community.
  • DERI (Digital Enterprise Research Institute) International is a collection of bilateral agreements between like-minded institutes working on the Semantic Web and Web Science. Its mission is to exploit semantics so that people, organizations, and systems can collaborate and interoperate on a global scale. DERI conducts and funds research in Semantic Web technologies, conducts projects that have led to numerous prototype applications, and develops ontologies. The following are a few interesting links from DERI's Irish branch in Galway:
    • Research Clusters covering such topics as eLearning, Semantic Reality, Semantic Web Services, Industrial and Scientific Applications of Semantic Web Services, and Social Software. Each cluster has its own website and projects.
    • Research Projects, a lengthy list of ongoing projects.
    • Tools, a lengthy list of software tools available for download, typically from SourceForge.
  • University of Georgia's Large Scale Distributed Information Systems has a wide array of semantic applications available. The online repository has descriptions, downloads, and online demos. The applications cover such functions as visualization, ontology queries, ontology browsing, web services, and more.
  • 10 Semantic Apps To Watch, from the ReadWriteWeb site, is an intriguing list of new semantic-web-related applications that are now available. The article first explains what its authors mean by a "Semantic Application," and then briefly describes each of the ten applications' innovative use of this new technology.
  • It's also interesting to read the comments at the end of this article, many of which are from readers pointing out other semantic applications they have discovered.
Semantic Website Enhancements
Semantic Web Crawling: A Sitemap Extension
    This specification allows website managers to provide an RDF sitemap which would be visible to users browsing the Semantic Web.
Triplify
    Triplify is an open-source, light-weight add-on to web applications that can read the content of the application's relational database(s) and expose their inherent semantics. According to the Triplify website, for a typical Web application a configuration for Triplify can be created in less than an hour. Triplify is based on the definition of relational database queries for a specific Web application in order to retrieve valuable information and to convert the results of these queries into RDF, JSON and Linked Data. A "triplified" web application can then provide its data to other applications on the web, enabling use of its information in "mashups."

    The Triplify project already has configurations for a variety of widely used content management systems, such as OpenConf, WordPress, Drupal, Joomla!, osCommerce, and phpBB. (The page that has links to these configurations also has a great list of other Semantic Web resources.) Triplify is one of the applications developed by AKSW. (I plan to download Triplify and integrate it in an instance of WordPress on my home computer.)

    Microformats are orthogonally related to the Semantic Web through their use of RDF-like semantics in HTML class attributes. Designed for humans first and machines second, microformats are a set of simple, open data formats built upon existing and widely adopted standards. They are highly correlated with semantic XHTML, sometimes referred to as "real-world semantics" or "lossless XHTML." Microformats are designed to enable more and better structured blogging and web publishing. The Microformats site provides an array of code and tools for use in producing markup in microformats.
    RDFa in HTML is a proposed W3C specification that enables markup of RDF-like syntax within XHTML content. RDFa in XHTML provides a set of XHTML attributes to augment human-readable content with machine-readable hints. It enables the expression of simple and more complex datasets using RDFa, and in particular turns the existing human-visible text and links into machine-readable data without repeating content. The goals and approach of this specification are similar to those of Microformats, but it extends XHTML by use of an RDF-like syntax rather than using CSS classes.
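To convey the flavor, here is a drastically simplified sketch (not a conformant RDFa processor, which must also handle nesting, prefix declarations, and inherited subjects): attributes like about= and property= turn visible text into triples.

```python
# Minimal sketch of the RDFa idea: extract (subject, property, object)
# triples from about= and property= attributes in XHTML markup.
from html.parser import HTMLParser

class RDFaSketch(HTMLParser):
    def __init__(self):
        super().__init__()
        self.triples = []
        self._pending = None  # (subject, property) awaiting its text

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "property" in attrs:
            self._pending = (attrs.get("about", "_:doc"), attrs["property"])

    def handle_data(self, data):
        if self._pending:
            self.triples.append(self._pending + (data,))
            self._pending = None

parser = RDFaSketch()
parser.feed('<p about="http://example.org/book" '
            'property="dc:title">Weaving the Web</p>')
print(parser.triples)
# [('http://example.org/book', 'dc:title', 'Weaving the Web')]
```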
    Exhibit is a three-tier web application framework written in Javascript, which you can use with various kinds of data files, including JSON and RDF, to produce knowledge-enhancing "mashups" like Google Maps. Exhibit creates interactive user interfaces displaying record data sets on maps, timelines, scatter plots, interactive tables, etc. Exhibit is one of the projects in knowledge management developed by MIT, partly with NSF funding.
Semantic MediaWiki
    Semantic MediaWiki (SMW) is a free extension of MediaWiki – the wiki system powering Wikipedia – that helps to search, organise, tag, browse, evaluate, and share the wiki's content. While traditional wikis contain only texts that computers can neither understand nor evaluate, SMW adds semantic annotations that bring the power of the Semantic Web to the wiki.
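SMW's inline annotations use a [[property::value]] syntax inside ordinary wiki text. A toy parser showing how such markup yields property assertions about the page (illustrative only, not SMW's implementation):

```python
# Toy parser for SMW-style inline annotations: markup like
# [[Capital of::Germany]] yields a property assertion about the page.
import re

def annotations(page_title, wikitext):
    triples = []
    for prop, value in re.findall(r"\[\[([^:\]]+)::([^\]]+)\]\]", wikitext):
        triples.append((page_title, prop, value))
    return triples

text = ("Berlin is the [[Capital of::Germany]] and has "
        "[[Population::3,396,990]] inhabitants.")
print(annotations("Berlin", text))
# [('Berlin', 'Capital of', 'Germany'), ('Berlin', 'Population', '3,396,990')]
```

Readers of the rendered page see only the values; the properties accumulate into a queryable knowledge base behind the wiki.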
    SMW+ is Ontoprise's production version of the open-source Semantic MediaWiki + Halo Extension software, which was originally developed as part of the 2003-04 Halo project for scientific information discovery. SMW+ makes the process of annotating wiki content much easier by adding a variety of useful interface tools, and it also helps writers research information by using the wiki's built-in ontology browser. SMW+ is designed to enable and enhance knowledge collaboration in organizations. It's available as a free download from SourceForge, or as a reasonably priced bundled version for Windows. Ontoprise also offers service contracts for the product. The impressively detailed list of features on the Ontoprise website gives a good overview of SMW+ capabilities. These include:
    • Semantic Toolbar: Lets users create, inspect and alter semantic annotations in the wiki text without knowing the annotation syntax.
    • Advanced Annotation Mode: In this mode, wiki pages are displayed in the same way as they are displayed in the standard view mode. However, users can easily add annotations by simply highlighting the word or passage they want to annotate.
    • Ontology Browser: Allows easy navigation through the wiki's ontology without the need to access individual articles. It helps the user understand the ontology and maintain an overview of it.
    • Question Formulation Interface: Normally, making queries against the semantic wiki involves knowing and using a complex syntax. The Question Formulation Interface provides a graphical interface that lets inexperienced users easily compose their own queries.
    • Auto completion: This tool greatly simplifies users' ability to generate annotations. With auto completion activated, users don't have to worry about the correct spelling of an article's or property's name, because the tool extracts possible completions from the semantic context. For example, it checks which attribute values are possible for a particular attribute and shows only those to the user. This tool is used in the wiki text editor, the semantic toolbar, the query interface, and the combined search.
      ARC is an API for LAMP-based (Linux-Apache-MySQL-PHP) websites. Its goal is to reach out to the larger Web developer community, to enable the combination of efforts like microformats with the utility of selected RDF solutions such as agile data storage, run-time model changes, standardized query interfaces, and mashup chaining. ARC tries to keep things simple and flexible. All features are backed by practical use cases. One of the underlying premises of ARC is that RDF is a productivity booster that can make website implementation much faster if it's used pragmatically.

      ARC includes the following capabilities:

    • Parsers for RDF/XML, Turtle, SPARQL + SPOG, Legacy XML, HTML "tag soup," RSS 2.0, and others.
    • Serializers for N-Triples, RDF/JSON, RDF/XML, Turtle, SPOG dumps.
    • RDF Storage using MySQL with support for SPARQL queries
    • SemHTML RDF extractors for Dublin Core, eRDF, microformats, OpenID, RDFa
    • Use of remote stores, allowing the website to query remote SPARQL endpoints as if they were local stores (results are returned as native PHP arrays)
    • SPARQLScript, a SPARQL-based scripting language combined with output templating
    • Light-weight inferencing
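ARC itself is written in PHP; as a language-neutral illustration of the remote-store pattern in the list above, the sketch below builds a SPARQL Protocol request URL and reads a canned response in the W3C SPARQL Query Results JSON format (no network access is made, and the endpoint URL is only an example):

```python
# Sketch of querying a remote SPARQL endpoint: form the request URL,
# then read rows out of a SPARQL Query Results JSON document.
import json
from urllib.parse import urlencode

def endpoint_url(endpoint, query):
    # Per the SPARQL Protocol, the query travels in the "query" parameter.
    return endpoint + "?" + urlencode({"query": query})

url = endpoint_url("http://dbpedia.org/sparql",
                   "SELECT ?s WHERE { ?s a ?o } LIMIT 1")

# A canned response in the SPARQL Query Results JSON format:
payload = json.loads("""{
  "head": {"vars": ["s"]},
  "results": {"bindings": [
    {"s": {"type": "uri", "value": "http://dbpedia.org/resource/Berlin"}}
  ]}
}""")
rows = [{v: b[v]["value"] for v in payload["head"]["vars"]}
        for b in payload["results"]["bindings"]]
print(rows)  # [{'s': 'http://dbpedia.org/resource/Berlin'}]
```

In ARC, the same pattern lets a website treat a remote endpoint as if it were a local store, with results returned as native PHP arrays.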
    ARC applications and websites. Of as much interest as ARC itself are the numerous applications and extensions that have already been built with it, many of which are useful for semantically enhancing websites on their own. The following are a few examples:
      • Trice - A Semantic Web framework (still in development).
      Calais Marmoset
        Marmoset, one of several Semantic Web tools from the OpenCalais project, is a simple yet powerful tool that makes it easy for publishers to generate and embed metadata in their content in preparation for Yahoo! Search's new open developer platform, SearchMonkey, as well as other metacrawlers and semantic applications. Marmoset uses the OpenCalais web service, which can provide search engine crawlers with rich semantic data to consider when they index a site's pages. Yahoo!'s search engine can analyze this semantic data, provided in Microformats, and other search engines are likely to follow. As a result, users accessing a Marmoset-enhanced website through search engines will get better targeted results.
      Other Resources
      Ontology Libraries
      One of the best features of ontologies is their design for reuse. It's not clear to me what happens when you encounter a dozen ontologies for "person" or "job", etc., in the ontology libraries on the web, but it's certainly useful that you can search for existing ontologies and bring the objects you want to model into your own ontology. There are a few ontologies for commonly used objects that are nearly de facto standards now. The following is a list of other resources available for finding ontologies on specific topics:
      • Simile Ontologies This library includes those developed by MIT as part of the Simile project as well as a list of others that have been used by the project.
      • Swoogle Swoogle is a research project carried out by the ebiquity research group in the Computer Science and Electrical Engineering Department at the University of Maryland.
      • Google Google can restrict its search to files of type "owl", as this sample search shows.
      • OntoSelect Ontology Library This library has an ontology search system with several unique and innovative features, including use of Wikipedia topics as the basis for one type of search.
      • BioPortal BioPortal is a sophisticated web application for accessing and sharing biomedical ontologies. It features several advanced search and visualization tools, as well as tools for mapping concepts between different ontologies.
      • SchemaWeb This is a comprehensive directory of RDF schemas which, in addition to typical browse-and-search interfaces, also provides an extensive set of web services to be used by software agents for processing RDF data.
      • Watson This link points to Watson's terrific web interface, which is one of the best for searching out ontologies that match your topics of interest. Watson also has a Protege plugin, but I haven't been able to make it work. The plugin, when working, would let a developer search and add classes to their ontology directly from within Protege.
      • TONES Ontology Repository This repository is primarily designed to be a central location for ontologies that might be of use to ontology tools developers for testing purposes.
      • Ping the Semantic Web Developed as a free web service by Zitgist, a company "incubated" by OpenLink, PingtheSemanticWeb (PTSW) is an archive of recently created or updated RDF documents on the web. When such a document is created or updated, its author can notify PTSW by pinging the service with the document's URL. Crawlers and other software agents then use PTSW to learn when and where the latest updated RDF documents can be found. This dynamically updated library displays the 25 most recently updated ontologies in real time. Using PTSW's data store, you can retrieve data on all RDF files by namespace or by class, with the option to download the files.
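      Mechanically, a ping of this sort is just an HTTP GET whose query string carries the RDF document's URL. The sketch below builds such a ping URL in JavaScript; note that the endpoint path shown is my assumption for illustration, not PTSW's documented API.

```javascript
// Hypothetical sketch: building a ping URL for a service like PTSW.
// The service base path below is an assumption, not a documented endpoint.
function buildPingUrl(serviceBase, documentUrl) {
  // The RDF document's URL must be percent-encoded so that it survives
  // as a single query-string parameter.
  return serviceBase + "?url=" + encodeURIComponent(documentUrl);
}

var ping = buildPingUrl(
  "http://pingthesemanticweb.com/rest/",
  "http://example.org/data/foaf.rdf"
);
// An HTTP client (or a browser's XMLHttpRequest) would then simply
// GET this URL to register the created or updated document.
```

      The only subtlety is the percent-encoding: without it, the slashes and colon in the document URL would be misread as part of the ping URL itself.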
      Papers, Projects and Documentation
      • W3C Semantic Web Activity This portal can be thought of as the Semantic Web's "Home Page." It brings together a vast amount of primary source documentation of the Semantic Web's languages and other standard specifications, including OWL, RDF, RDFa in XHTML, and SPARQL. In addition, this portal gathers all the major ongoing projects involving the Semantic Web and the groups conducting them. The page also lists a large number of publications and presentations on Semantic Web topics.
      • Rich Tags This paper describes a proposal/project for developing a system that uses semantic tags for enhancing the searchability of web pages. (The proposal sounds similar to the W3C specification for RDFa in XHTML.)
      • Building A Semantic Website This article is a little old (2001), but has a good overview of the steps and components of building a web application using RDF ontologies.
      • TONES TONES is a European Union research project into the design and use of Thinking ONtologiES. Begun in 2005, it is scheduled to complete its work in 2008. The TONES website has links to all of the outputs of the project, including software tools and research papers. This PDF contains a 2006 presentation overview of the TONES project.
      • RapidOWL This methodology for developing OWL ontologies is based on iterative refinement, annotation, and structuring of a knowledge base. A central paradigm of RapidOWL is its focus on the smallest possible information chunks; collaboration comes into play when those chunks are selectively added, removed, or annotated with comments and ratings. The methodology is designed to be lightweight, easy to implement, and supportive of spatially distributed, highly collaborative scenarios. It is implemented in the OntoWiki software project.
      • Linked Data Comes of Age This very useful article clearly explains what is meant by linked data based on RDF and how it fits into the overarching vision of the Semantic Web.
      • Zitgist's Papers and Reports This is a useful list of resources on subjects relevant to Semantic Web research. The Zitgist Lab site also has a good page of documents on Best Practices for RDF.
      • RDF Schemas This site has a clear explanation of the various "vocabularies" used to develop ontologies: RDF, RDFS, OWL, and Dublin Core. The site also has a terrific list of resources for programmers.
      • Nodalities Magazine Sponsored by Talis, this free, bimonthly online magazine (released in PDF format) tries to bridge the divide between those building the Semantic Web and those interested in applying it to their business requirements. The magazine is supported by the Nodalities blog, podcasts, and Semantic Web development work.
      • DERI Papers and Reports This site contains a large collection of research papers and technical reports produced by DERI International.
      Business Resources
      This list includes companies I've encountered that appear to have substantial expertise in applying Semantic Web technologies to practical business requirements.
      BBN Technologies
        BBN is a technology company with a broad range of expertise, services, and products—including support for Semantic Web application development. As an indication of the impressive expertise of this company, BBN was the prime contractor for DARPA (Defense Advanced Research Projects Agency) in development of DAML (DARPA Agent Markup Language), which in turn led to the development of OWL. BBN also provides the Asio Tool Suite for third-party development and the open source Snoogle and Parliament tools.
      Cycorp
        Cycorp is a leading provider of semantic technologies that bring intelligence and common sense reasoning to a wide variety of software applications. The Cyc software combines ontologies and knowledge bases with a powerful reasoning engine and natural language interfaces to enable the development of novel knowledge-intensive applications.
      Clark & Parsia
        Clark & Parsia is a small R&D firm—specializing in Semantic Web and advanced systems—based in Washington, DC. They have expertise in a range of semantic-web technologies, including OWL, RDF, reasoning at scale, and ontology development. They offer commercial support for Pellet, a best-of-breed Open Source OWL DL reasoner in Java, and related systems.
      Semantic Arts
        This company helps companies (medium/large with 1,000 to 10,000 employees) migrate to semantically-based SOAs (Service Oriented Architectures).
      Zitgist
        Zitgist has a number of interesting products for viewing and querying the Semantic Web, as well as offering services for ontology development, content conversion, and web services. They also provide several open-source products for both consumer and corporate use in furthering use of the Semantic Web.
      Semantic Web Company
        The Semantic Web Company (SWC), based in Vienna, Austria, provides companies, institutions and organizations with professional services related to the Semantic Web, semantic technologies and Social Software. They provide services in consulting, education, and project management, among others.
      Talis
        Talis has developed its own application development platform—the Talis Platform—and also builds Semantic Web applications for other organizations. To date, Talis' applications have been geared to meeting the needs of libraries and academic institutions.
      Semsol
        Semsol offers a wide range of Semantic Web-related services, from consulting and data modeling to interface design and production. Semsol is a pioneer in bringing Semantic Web technologies to widely deployed server and database environments. Semsol is the company behind development of the open-source tool ARC, as well as for several of the applications built on top of ARC, including Trice, SPARQLBot, and paggr (referenced earlier).
      Cortex
        Cortex's software platform and consulting business is based on their Competitiva system. Cortex's technology proposes to mine unstructured data on the Web, using Competitiva's intelligent system to automatically convert pages and documents to a semantic format (i.e. RDF). Cortex has an R&D team working to bridge the Semantic Web gap by automatically enriching text with semantic content for themselves and their customers.
      July 25th, 2008

      A Close-Up Look At Today’s Web Browsers: Comparing Firefox, IE 7, Opera, Safari

      My, we've come a long way in browser choices since 2005, haven't we? It's been a very heady time for programmers who dabble in the lingua franca of the World Wide Web: HTML, JavaScript, Cascading Style Sheets, the Document Object Model, and XML/XSLT. Together, this collection of scripting tools, boosted by a technique with the letter-soup name "XMLHttpRequest," became known as "Ajax." Ajax spawned an avalanche of cool, useful, and powerful new web applications that are today beginning to successfully challenge traditional computer-desktop software like Microsoft Word and Excel. As good as vanguard products like Google's Maps, Gmail, Documents, and Calendar apps are, one only has to peek at what Apple has accomplished with its new MobileMe web apps to see how much like desktop applications web software can be in 2008.
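      For readers who haven't seen it, the XMLHttpRequest technique at the heart of Ajax boils down to a few lines: request a resource asynchronously, then hand the response to a callback that updates the page without a reload. Here is a minimal sketch (the URL and element ID are placeholders, not from any real application):

```javascript
// Minimal Ajax sketch: fetch a resource asynchronously with
// XMLHttpRequest and pass the response text to a callback.
function fetchAndUpdate(url, onDone) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", url, true); // third argument: asynchronous
  xhr.onreadystatechange = function () {
    // readyState 4 = request complete; status 200 = HTTP OK
    if (xhr.readyState === 4 && xhr.status === 200) {
      onDone(xhr.responseText);
    }
  };
  xhr.send(null);
}

// In a page, the callback would typically rewrite part of the DOM:
// fetchAndUpdate("/headlines.txt", function (text) {
//   document.getElementById("news").textContent = text;
// });
```

      The asynchronous flag is what makes the technique feel desktop-like: the page remains responsive while the request is in flight, and only the affected fragment of the page changes when the response arrives.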

      That this overwhelming trend toward advanced, desktop-like applications has happened at all is the result of the efforts of determined developers from the Mozilla project, which rose from the ashes of Netscape's demise to create the small, light, powerful and popular Firefox browser. The activity of the Mozilla group spurred innovation from other browser makers and eventually forced a trend towards open standards that made the emergence of Ajax possible.

      This article starts with a brief history of web browsers and then jumps into a look at the feature set of the four primary "modern" web browsers in 2008. The comparison of browser features begins by listing the core features that all these browsers have in common. The bulk of the article lists in detail "special features" of each browser and each browser's good and bad points, as they relate to the core browser characteristics. Following that, I present some recent data on the comparative performance of these browsers. The article concludes with recommendations I would make to organizations interested in making the switch from IE6 in 2008.

      1. Web Browsers in 2008: A Brief History
      2. Comparison of Browser Features
      3. Browser Performance
      4. Conclusions
      5. Bookmarks for Further Reading
      Web Browsers in 2008: A Brief History

      In 2008, web designers and programmers can finally see the light at the end of the very long, dark tunnel that began with the first browser wars of the late 1990s. That war introduced "browser incompatibility," as Netscape and Microsoft each struggled to establish their own, incompatible standards. At that time, the standards approved by the World Wide Web Consortium (w3c) were somewhat skimpy and behind the times in terms of what those companies wanted to do.

      It wasn't long before the w3c approved a standard for JavaScript, which Netscape had introduced a couple of years before, as well as a standard for CSS Level 2.0, which was to be a major advance in the "designability" of web pages. CSS 2.0 promised an end to the ubiquitous use of "font" tags, invisible graphics, and HTML tables on which designers relied to convert their ideas, typically developed using visual design tools such as Photoshop, to HTML. However, those new standards were too late, since Microsoft was making aggressive use of its monopoly on corporate desktops to promote Internet Explorer at the expense of Netscape. That effort, of course, eventually succeeded, and Microsoft was found guilty of antitrust violations (though never effectively punished for them).

      Even though IE eventually garnered a monopoly in corporate browser usage equal to Windows' monopoly as an operating system, web programmers and designers who developed content for the general public were still obliged to support two completely different and incompatible "standards," neither of which was truly standards-compliant. The dual nature of the browser market caused programmers to shy away from JavaScript and CSS entirely, since it was too much effort to deploy them in a way that would render well on both browsers. Unfortunately, this meant that the state-of-the-web art remained stuck in 1998 until just the last couple of years, when Mozilla's Firefox and Apple's Safari browsers began slowly whittling away at IE's dominance.

      Like earlier versions of Internet Explorer, IE 6, introduced in 2001 as part of Windows XP, maintained its own set of proprietary standards that largely ignored the leadership of standards bodies like the w3c. At that time, Microsoft could afford to do so, since there was virtually no competition left. However, by 2004, Firefox had emerged from the open-source Mozilla group (which evolved from Netscape's decision to open-source the Netscape browser code) as a very interesting, lightweight browser that prided itself on close adherence to w3c standards.

      Meanwhile, in Europe, the Opera browser was moving in the same direction as Firefox--toward full implementation of w3c standards for JavaScript and CSS 2. In 2005, Opera became a totally free browser, where previously it had used advertising as a source of revenue from non-paying customers. At this point, Opera became a more significant player, a position it retains today despite its very small market share outside of Europe.

      In 2003, Apple introduced Safari 1.0 for Mac OS X, and shortly thereafter Microsoft ceased support of Internet Explorer for the Mac platform. Safari was based on the open-source code used for the Linux browser Konqueror, and in 2005 Apple released the core Safari code--its "rendering engine"--as open source through establishment of the WebKit project. Since then, the WebKit team has made rapid progress in adopting w3c standards and bringing its code base up to the state-of-the-art as defined by those standards. Safari is the dominant browser on Mac OS X, with Firefox a strong second, and the increasing market share of Mac OS X in the last couple of years has resulted in corresponding increases in the market share of Safari. Now that Safari is available for Windows and is being used for Apple's iPhone platform, Safari's market share will likely continue to rise in coming years.

      In 2007, Microsoft finally responded to the growing competition from Firefox and Safari, and released Internet Explorer 7.0 in concert with its release of Windows Vista. Although IE 7 maintains a significant lag behind the other browsers in adopting open standards, it has made important improvements over IE 6. And the early beta releases of IE 8, accompanied by assurances from Microsoft's technical engineers, suggest that IE 8 will make even more significant improvements in becoming standards-compliant.

      It is the convergence of these trends that is causing that glow at the end of the tunnel at last. With the demise of IE 6 (whose market share is rapidly collapsing), the final major remnant of the ugly browser war of 1998-2000 will be a thing of the past. Since Microsoft appears serious about getting IE 8 to market in less than the 6 years that elapsed between IE 6 and IE 7, web developers can be hopeful that their use of JavaScript, CSS, and HTML will no longer be a struggle to find the right "hack" to accommodate all the browser choices out there. At that moment, the web will finally be ready to evolve into the platform that Java aspired to, but never managed to become: A platform on which developers can build applications that are agnostic both of the user's client and of their operating system.

      That outcome is a win-win for everyone… except, perhaps, Microsoft, since it will bring to fruition the open Internet it has tried so long to keep at bay.

      The next section of this report will look in detail at the feature set of the four primary "modern" web browsers in 2008, by market share. Following that, the report presents some recent data on comparative performance for these browsers, and finally I conclude with a brief set of recommendations. The browsers have all been tested primarily on a Windows Vista Ultimate platform, and the recommendations are geared to organizations that have been relying on IE 6 or IE 7 as their default browser. Safari, Firefox, and Opera have also been tested on a Mac OS X 10.5 "Leopard" system.

      Comparison of Browser Features

      This section looks in detail at the many features that both bind and distinguish the four browsers included in this study: Firefox 3.0, Internet Explorer 7.0, Opera 9.5, and Safari 3.1.

      The first part of this section pulls together all of the features these four browsers have in common. This set of features can be considered a baseline that defines what a "modern" browser can do. Naturally, some of the browsers are more "modern" than others, so they go far beyond these features in distinguishing themselves from the others.

      For each browser reviewed, the write-up begins with a list of the browser's "Special Features"--that is, its features that are unique or especially distinguishing. Following that, each browser's features are listed in comparison with each other in a list of "Good Points" and "Bad Points." Each item in these lists is categorized using the set of "Baseline Features" below.

      Baseline Features
      Accessibility settings
      • Ability to define page colors and page fonts.
      • Ability to set personal style sheets.
      • Ability to easily resize fonts.
      Ad blocking
      • Ability to prevent automatic loading of page images.
      Bookmark management
      • Ability to set bookmarks for web pages visited
      • Ability to organize bookmarks into folders.
      • Ability to arrange bookmarks in a special toolbar. Toolbar can contain folders of bookmarks as well as individual links.
      • Ability to import and export bookmarks as HTML.
      Configuration management
      • All of the tested browsers support use of a proxy server and use of an automated configuration file on the network for applying browser settings.
      Connection settings
      • Ability to define proxy and SSL (secure socket layer) settings, as well as supported HTTP protocols.
      Developer tools
      • Ability to identify errors (JavaScript at a minimum) when loading a web page.
      Downloads management
      • No common features.
      History tools
      • Ability to view browser history by date and to sort history items.
      • Ability to search stored history items.
      Home page settings
      • Ability to set home page and define basics about what browser shows when opened.
      Page information details
      • Ability to view page HTML source.
      Privacy settings
      • Ability to define basic settings for cookies.
      • Ability to define how long history items are stored, or whether they're stored at all.
      RSS feed management
      • Ability to subscribe to and view RSS feeds.
      • Pages that contain RSS feed information are identified with special symbol or option.
      Search engine support
      • Web search field located in the browser toolbar.
      • Web search options include some basic customization.
      Search-in-page tools
      • Ability to find words in the current web page.
      Security settings
      • All browsers offer the ability to turn off JavaScript and plugins.
      • Ability to block pop-up ads/windows.
      • Ability to define level of encryption.
      Standards support
      • Support for HTML 4.0
      • Support for CSS 1.0
      • Support for JavaScript/EcmaScript
      • Support for DHTML
      • Support for XMLHttpRequest
      • Support for Rich Text Editing
      • Support for basic image formats (JPEG, GIF)
      Tab management
      • Support for tabbed browsing (viewing web pages in tabs rather than individual windows)
      • Ability to rearrange tabs by drag/drop
      • Ability to direct links and new pages to tabs rather than windows

      The following matrix summarizes my analysis of each browser. The "positive" aspects of each are indicated with a light-green gradient, and where the positives are exceptionally strong, you'll see a darker green gradient. Likewise, the "negative" aspects are indicated with a light-red gradient, and negative traits that are especially bad have a darker red gradient. Where the background is white, the browser basically meets the baseline expectations listed above.

      Matrix of Web Browser Functionality
      [Color-coded matrix comparing Firefox 3.0, IE 7.0, Opera 9.5, and Safari 3.1 across the browser characteristics listed above, from Bookmark management through Tab management, using the green/red shading described.]

      Firefox 3.0
      Special Features
      • Firefox lets users search within a page simply by typing (without invoking the search function), a very useful feature.
      • Ability to tag bookmarks and history items, and to organize those items using tags.
      • Best range of add-ons that can provide a greatly expanded feature set.
      • Ability to apply "themes" to customize the browser's look and feel.
      Good Points
      Bookmark management
      • Firefox's import function is very good and easy to use… not only does it import bookmarks, but most other browser settings as well (cookies, history, etc.). However, on the Mac it imports only from Safari, and on Windows it supports only IE and Opera.
      • Bookmark folders offer option of opening all links at once in a single window.
      • Users can drag page links into folders on the bookmark bar directly, rather than having to visit the bookmark management page to do so. (Safari also has this feature.)
      Connection settings
      • Fine-grained tools for customizing security and connection settings.
      Download management
      • Full-featured Downloads window allows you to find downloads on your hard drive, open downloads, and search them. The window also displays the time/date of the download.
      History tools
      • Excellent history panel with lots of options for sorting/viewing and searching, as well as support for tagging history items.
      Page information details
      • Great page info panel with all the detail you'd want.
      Search engine support
      • Easy-to-use, customizable web-search field on the toolbar, which includes an optional "suggest" feature. As in IE, Firefox users can also import new search engines from a web page.
      Search-in-page tools
      • Excellent in-page search functionality.
      Security settings
      • Fine-grained tools for customizing security and connection settings.
      Standards support
      • Support for most non-basic web standards, including:
        • CSS 2.1
        • XHTML
        • PNG, SVG
        • HTML Canvas
        • DOM 1, DOM 2
        • Minimal CSS 3.0
      Tab management
      • Supports dragging URLs to tab bar to open new pages.
      • Offers the option of saving currently open tabs for the next session.
      • Firefox's Preferences window is very similar to Opera's. It has grown more complex over the years, but retains the deliberate simplicity that distinguished it from the full-blown Mozilla browser it evolved from. Except for the label "Applications," its tabs are intuitive. As with the other implementations, one could argue about the emphasis placed on the various settings, but in general Firefox provides a very easy way to customize user settings.
      • Like Opera, Firefox is available for a wide variety of platforms, including Windows, Mac, Linux, and other Unix systems.
      • Open source code means browser improvements and security fixes come more quickly.
      Bad Points
      Bookmark management
      • No support for Safari bookmarks on Windows.
      Developer tools
      • Only basic developer tools in the default configuration.
      RSS feed management
      • Firefox's RSS implementation is noticeably weaker than that of the other browsers. While plugins exist to improve its support, this review looks only at the browsers' default options. One major problem is that if you choose to always use "Live Bookmarks" for a feed without knowing what that means, you can't change your mind later on. Live Bookmarks are an inferior way to select items to read, since they show only the headlines, not the textual summaries or graphics that may be provided in the feed. Further, I could find no way to manage my feeds by deleting them or organizing them into folders once I had subscribed. Even if you opt out of using Live Bookmarks from the get-go, Firefox places a large box at the top of each RSS feed page asking whether you want to use them. Firefox also provides no way to mark articles as "read," to sort or search articles, or to choose between viewing headlines and full article summaries… options offered by all the other browsers.
      Standards support
      • Only minimal support for major CSS 3.0 features, including lack of support for resizable text fields.
      • Plugins and themes require reliance on third-party developers, who may or may not update a given plugin or theme for a new version of Firefox. They also require some user maintenance to keep updated, and users must restart the browser to install themes and plugins.
      • Like IE, Firefox does not preserve information entered on a form if you use the back button and then go forward again. Anything you've entered is wiped out, unlike in Opera and Safari.
      • Firefox is noticeably slower than the other browsers to launch and load the home page.
      • Firefox cannot open PDF files natively in the browser window without requiring a plugin.
      Internet Explorer 7.0
      Special Features
      • IE 7 is the only browser that allows users to set more than one home page.
      • IE 7's tab implementation has a feature that the other browsers could benefit from: A view showing large thumbnails of all current tabs, along with their page titles. This feature is standard in Shiira, a WebKit-based browser, but not in any other browser that I know of. (There are, however, plugins for Firefox and for Safari that accomplish this.)
      Good Points
      Configuration management
      • IE has a large number of settings to help system administrators customize the browser configuration for users.
      Connection settings
      • Fine-grained tools for customizing security and connection settings.
      Privacy settings
      • IE 7 has fine-grained tools for customizing privacy settings.
      RSS feed management
      • A welcome addition to IE7 is its support for RSS feed subscriptions. Its implementation is quite good, and as with other browsers you can manage your subscriptions in the "Favorites/History" area.
      Search engine support
      • IE 7 adds a search field to the toolbar. It can be customized, but comes with Live Search as the default rather than Google or Yahoo (the industry leaders). You can, however, customize the choice of search engines by visiting a Microsoft website and adding items to the list. This is a very easy process.
      Security settings
      • IE 7 lets you disable each plugin individually, so you can easily disable Flash once it's installed. (However, even after installing Flash, IE 7 could not load any pages on the website I was testing.)
      • Fine-grained tools for customizing security and connection settings.
      • IE 7 includes a "phishing" blocker, which should help users identify sites that attempt to steal user passwords by appearing to be standard e-commerce websites like Amazon, eBay, or banks.
      Standards support
      • Unlike IE6 and earlier, IE7 joins the other browsers in partially supporting the PNG-24 standard, which allows designers to use images with alpha transparency.
      Tab management
      • IE can bookmark a set of tabs into a folder.
      • Design is clean and easy to understand for the most part.
      • Like Safari, IE 7 offers users the ability to email full page contents as well as page links.
      Bad Points
      Bookmark management
      • The "Favorites Center" is missing a couple of essential features:
      • There is no way to search your bookmarks (although you can search your history)
      • I couldn't figure out how to add folders to the list.
      • In addition, the process of changing URLs is cumbersome compared with Safari or Firefox. Firefox's excellent panel for managing history, tags, and bookmarks lets you edit properties directly, and Safari offers a similar view for the same functions. By contrast, in IE you have to right-click and open a Properties window to change a URL.
      • Another shortcoming of IE's bookmarks implementation is that users cannot open an entire folder of links at once. This has been standard practice on modern browsers for a while, letting users quickly access a group of websites they use frequently as part of a single activity. Firefox, Safari, and Opera all offer this option.
      • IE has very basic import/export functions for bookmarks. Like Safari, it requires users to browse the hard drive for the HTML bookmarks file to import. IE offers no other import features.
      • IE is the only browser that provides no way for users to search their bookmarks.
      Developer tools
      • Only basic developer tools in the default configuration.
      Download management
      • I couldn't find a way in IE to set a folder for Download files other than the default folder.
      • IE is the only browser that provides no "Downloads" window, by which users can see the files they've downloaded and navigate to and/or open those files.
      History tools
      • I found it annoying that there is no way to view your page history without opening the Favorites window/sidebar. Firefox and Safari both have a top-level "History" menu that shows your visited-page history; the only tool IE provides outside the sidebar is a pull-down menu adjacent to the URL address field--the same approach as Opera--but this isn't nearly as convenient or comprehensive. And unlike Opera's single-click access to history, IE 7 requires multiple clicks unless you keep its sidebar open all the time (which isn't as easy to do as in IE 6).
      Page information details
      • Unlike most other browsers, IE 7's source-code view merely launches Notepad (as it has since IE 3) and offers no way to view the CSS or JavaScript sources, or to make changes to those and view the results in the browser window. (These features are standard in Firefox and Safari.)
      Privacy settings
      • IE's cookie manager is buried in an "Advanced" panel within the Privacy options. Unlike the other browsers, IE offers no way to delete all cookies, delete individual cookies, search cookies, or even view stored cookies.
      Search-in-page tools
      • IE is the only browser that provides no way to identify all instances of search terms in the current web page.
      Security settings
      • Because of the very large number of malware exploits that have targeted Internet Explorer, especially since the release of IE 6 in 2001, the IE Security settings have become far too complicated for the ordinary user to comprehend. Even system administrators who wish to secure IE 7 would need to have some specialized training in order to do so.

        The biggest problem with IE--which has also been one of the main reasons for its high adoption by IT shops--is its strong support for ActiveX controls. Because IE is so tightly woven into the Windows operating system, ActiveX programs present a huge security risk, and it is largely through this channel that viruses, worms, spyware, and other malware have infected Windows PCs over the years. IE 7 has a plethora of settings designed to minimize the risk of ActiveX programs, but given the amount of work required of developers (to secure their ActiveX programs to run in IE 7) and of administrators (to apply settings that strike the appropriate balance between usability and security), securing IE remains a negative aspect of this browser.
      Standards support
      • Visiting some web pages--for example, the home page of the National Science Foundation--I found that IE 7 could not display the Flash content, so the home page wouldn't load. Apparently, there is a problem using Flash with IE7 under Vista. I noticed the warning about IE 7's "old" Flash player when visiting a number of websites that use Flash. On Windows Vista, IE7 was the only one of the browsers tested that could not load the NSF home page in its default configuration. Though Opera, Safari, and Firefox likewise did not have the Flash plugin, they displayed the static alternative (see top screenshot) and the rest of that page instead. Since nsf.gov uses Flash for its navigation bar, this means IE 7 cannot access any page on the NSF website.
      • Broken Flash Content in IE7
      • IE is the only browser that does not fully support the CSS 2.1 standard.
      • IE does not support any CSS 3.0 features.
      • IE is the only browser that supports neither SVG images nor the HTML Canvas tag.
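      To make the CSS gap concrete, here is a small, illustrative stylesheet (the class names and the choice of properties are mine, not the review's) using two CSS 2.1 features that IE 7 ignores but that Firefox, Opera, and Safari all render:

```css
/* Generated content (CSS 2.1): IE 7 ignores :before/:after and 'content' */
a.external:after {
  content: " (external link)";
  color: #888;
}

/* CSS table layout (CSS 2.1): IE 7 does not implement display: table */
.columns { display: table; width: 100%; }
.col     { display: table-cell; padding: 0 1em; }
```

      In IE 7 the links simply lose their suffix and the columns stack vertically; both features work only as of IE 8.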
      Tab management
      • IE 7's tab implementation is similar to the existing standard but doesn't offer as many options for managing tabs when you right-click on one of them. (For example, Safari offers the option of letting you bookmark or reload the current set of tabs.)
      • Most Windows users--and Mac users as well--will be disoriented by the absence of a menubar in the default configuration. You can add a menubar, but it doesn't appear at the top of the window as users will expect. In this regard, IE 7 works a lot like Opera has for a while. As a Mac user, I find it interesting that IE 7's model of eliminating each window's menubar matches the Mac OS's traditional approach. However, unlike the Mac, Windows Vista has no persistent, system-wide menubar that changes contextually depending on the currently active application. Removing the menubar from IE 7's windows therefore merely reduces, rather than enhances, usability.
      • IE 7 departs from the standard browser design by removing the home button from the toolbar and moving the reload and "stop loading" buttons to an unexpected location. As a result, the URL field is far longer than is necessary, and the design creates a subordinate toolbar that could just as well be served by the missing menubar.
      • More than once, I was asked if I wanted to turn on "Sticky Keys," a rather annoying intrusion.
      • Users of IE 6 and earlier will find the sidebar more difficult to use. For one thing, there is no menu item to open it. The sidebar opens only from a new drop-down window that appears when you click the star icon (Favorites Center) at the left-hand end of the tab bar (which doubles as the second toolbar).
      • IE 7's Preferences
      • IE 7's Preferences (Internet Options) window remains the worst of any major browser. It's cluttered, has nonintuitive section titles, and features an "Advanced" set of preferences that is virtually impossible to use. Why? First, the type is too small for many users to read; second, the various options are all treated as if they were equally important (they aren't); and third, the view provides no explanation of what each option means. This window is unchanged from IE 6.
      • Like Firefox, IE does not preserve information entered on a form if you use the back button and then forward again. Anything you've entered is wiped out, unlike Opera and Safari.
      • IE 7 is the only one of the browsers tested that is not available for Mac OS X or any other operating system besides Windows.
      Opera 9.5
      Special Features
      • Ability to apply "themes" to customize the browser's look and feel. Even better than Firefox, Opera can display and apply themes in the live browser without having to be restarted.
      • A built-in, full-featured email client that integrates well with the browser content.
      • Opera has the best and most useful sidebar of any browser, and with 9.5 they've integrated it much better than before into the interface.
      • Opera's toolbar
      • A built-in Notes tool for jotting down and storing notes. The tool lets you organize and search your notes.
      • Opera lets you tag RSS feeds with "labels."
      • Opera has a large inventory of available web widgets for various purposes, similar to those in Apple's Dashboard, Yahoo's widgets, and Microsoft's "gadgets," all of which run outside the browser. Unfortunately, Opera's widgets only work when Opera is running.
      • Opera's thumbnail tab previews
      • Opera is the only one of the tested browsers that displays thumbnail previews of the web pages in each tab, a very useful feature.
      • The most customizable interface of any reviewed browser. Nearly every component of the interface can be rearranged, and there are a wide variety of buttons that can be added to or subtracted from each component. Further, Opera has a large stock of preset "setups" that comprise theme, button, and toolbar settings in one package.
      • A "Small Screen" view that reformats the page to emulate what a user would see on a smartphone-type display.
      • A "Links" function that pulls a list of all page links into a panel in the sidebar.
      • Opera has easily accessible tools for customizing preferences for individual websites.
      • Other unique features, such as:
      • Trashcan history (for pages whose tabs you've deleted),
      • "Speed dial," which lets you organize top bookmarks and see them each time you open a new tab, and
      • A print preview feature that shows the print view immediately within the browser window.
      • Robust session management, allowing you to save multiple sessions and return to them at another date.
      Good Points
      Accessibility
      • Opera has the most advanced and easiest to use tools for testing accessibility of any reviewed browser.
      Bookmark management
      • Great tools for managing bookmarks... good sort and search options.
      • Great new UI features... much more organized and logical from the get-go. I like the toolbar icon in upper left, and the standard home/navigation buttons with the URL field. The new Opera standard skin is also great.
      • Folders in the bookmark bar can open all bookmarks at once in a single window.
      • Opera has the most options for exporting bookmarks… either all or selected, and either HTML or formatted ASCII.
      • Powerful and simple functions for importing bookmarks from other browsers--Firefox, IE, or Konqueror on Windows, and Firefox and IE on Mac OS X.
      Connection settings
      • Fine-grained tools for customizing security and connection settings.
      Download management
      • Full-featured "Transfers" window allows you to find downloads on your hard drive, open downloads, and search them. The window also displays the time/date of the download.
      Developer tools
      • Excellent built-in tools for web developers, including a JavaScript debugger and DOM viewer.
      History management
      • Great tools for managing browsing history... good sort and search options.
      RSS feed management
      • Excellent built-in options for subscribing and viewing RSS feeds. Opera also lets you tag feeds with various "labels."
      Search engine support
      • Full customization options for the toolbar search field, although the options are not as simple as those for Firefox and IE.
      Security settings
      • Fine-grained tools for customizing security and connection settings.
      Standards support
      • Support for most non-basic web standards, including:
        • CSS 2.1
        • XHTML
        • PNG, SVG
        • HTML Canvas
        • DOM 1, DOM 2
        • Minimal CSS 3.0
      Opera's Preferences Window
      Usability
      • Opera's Preferences window is well organized and reasonably simple.
      • Like Safari, Opera preserves form information you've typed in case you need to go back a page or two and return to the form again. You can use the back button to revisit earlier pages and then the forward button to return to the form, and your entered data will still be there.
      • Opera includes a feature called "Wand," which lets users store and reuse data for any of the forms they fill in on the web. This feature is similar to Safari's "Autofill," though it's more complicated to use.
      • Opera offers a synchronization feature that lets users sync their browser data across different PCs that they use. This service is similar to that offered by Safari through Apple's for-fee .Mac (soon to be renamed "MobileMe") service.
      • Like Firefox, Opera is available for a wide variety of platforms, including Windows, Mac, Linux, and other Unix systems.
      Bad Points
      Bookmark management
      • No support for Safari bookmarks on Windows, and import function doesn't cover history, cookies, passwords, etc. as does Firefox.
      • Opera is the only browser that doesn't have an option to bookmark all currently open tabs, though it does have powerful session management features that offer similar capabilities.
      History management
      • I found it annoying that there is no way to view your page history without opening the History sidebar. Firefox and Safari both have a top-level menu item called "History" that shows your visited-page history. The only tool provided is a pull-down menu adjacent to the URL address field--the same approach as IE 7--but this isn't nearly as convenient or comprehensive. However, at least Opera provides a single-click tool in its sidebar to access history, unlike IE 7, which requires a multi-click approach unless you keep its sidebar open all the time (which isn't as easy to do as in IE 6).
      Page information
      • Opera has no Page Info panel like Shiira or others that show in detail the resources loaded by the page.
      Search in-page
      • Opera has no advanced, in-page search capability like that of Safari or Firefox. However, you can see all instances of search terms using a function hidden in the main search field on the toolbar.
      Security settings
      • Opera is overly zealous in identifying "insecure" websites in its default state. It expects all web pages to be encrypted, and doesn't honor standard SSL certificates.
      • Setting many security preferences requires knowledge that most web users don't possess.
      Standards support
      • Little support for up-and-coming CSS 3.0 features.
      Tab management
      • You can't drag URLs to the tab bar to open them, as you can in Safari and Firefox. There's also no contextual menu item to open the URL. Thus, the only option is copy and paste into the URL field.
      • Doesn't support drag and drop text from browser. This is a drag!
      • Dragging an image from the browser gave me its URL rather than the image itself.
      • Notes view has no formatting abilities.
      • Opera's mail client can compose mail only as plain ASCII text, though it can view HTML mail.
      • Opera cannot open PDF files natively in the browser window without requiring a plugin.
      • Some of Opera's "Advanced" preferences are not really that advanced, and I'd argue that some deserve tabs of their own rather than being buried here. For example, Opera devotes an entire tab to "Wand," its autofill implementation, and another to "Search." Most users would more urgently want to customize how the browser handles Tabs or Security. In addition, the tab labeled "Web Pages" is pretty meaningless and should probably be labeled "Appearance" or "Style" instead.
      • Opera's interface can be confusing at times… for example, if you have the "Manage bookmarks" page open and select "History" from the side panel, the "Manage bookmarks" page doesn't get replaced with the corresponding History page. This pattern recurs throughout the sidebar/full-page functions. To further confuse users, the access links/menus to full-page details for each sidebar item aren't located in equivalent places in the interface. Some are easy to find… others hard. They should all work the same way.
      Safari 3.1
      Special Features
      • Safari features excellent drag/drop and copy/paste to word-processing documents. Such copies preserve links and formatting. Drags to TextEdit, the standard Mac RTF editor, also include images and other media. Pasting or dragging to Apple Mail preserves an almost identical HTML copy of the original page. By contrast, the same page copied from IE 7 to Windows Mail loses most formatting while preserving links and images, and WordPad, Microsoft's equivalent rich-text editor, could only accept unformatted ASCII text. Neither Opera nor Firefox can copy and paste formatted HTML (with images) to word-processing or RTF document editors. (See accompanying screenshots of the NSF home page: the shot on the left shows the home page pasted into the Apple Mail client; the shot on the right shows it pasted into Windows Mail.)
      • Safari copy/paste web content
      • Unique features, such as:
        • SnapBack, which makes it easy to get back to the web page that started a browsing session for a particular site (including Google searches),
        • The ability to drag tabs from the tab bar to make new windows or to add them to other windows.
        • On the Mac, Safari also features "WebClip," which lets you create live "widgets" from any part of a web page. This lets you easily view a given snippet--live--at any time without loading the web page in Safari.
      • Best support of advanced CSS 3.0 features, including native support for resizable text fields. In addition, Safari adopts the following CSS 3.0 standards:
      • Border image, which lets web page designers use a single image (either tiled or stretched) to create borders around box text.
      • Box-shadow, a previously difficult--but very popular--design element that puts a drop shadow on page elements.
        Safari supports CSS border images
        Safari supports CSS box drop shadows
      • Background-size, a technique that lets designers use a single background image for HTML page elements and resize the image as needed.
      • Multiple backgrounds, which lets designers specify multiple images to form a composite background for HTML page elements.
      • And many other advanced CSS techniques (some of which go beyond what's been drafted for CSS 3.0), including:
        • Text shadows
        • Transformations
        • Animations
        • Gradients
        • Reflections
        • Form styling
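      As a rough sketch of how these features are written in a stylesheet (the image file names here are hypothetical, and the -webkit- prefixes reflect WebKit's experimental, pre-standard implementations circa 2008), a single rule might combine several of them:

```css
.feature-box {
  /* Box shadow: a drop shadow with no sliced images required */
  -webkit-box-shadow: 0 2px 8px rgba(0, 0, 0, 0.5);

  /* Border image: one image, tiled or stretched, frames the box */
  -webkit-border-image: url(frame.png) 10 10 10 10 stretch;

  /* Multiple backgrounds: layered images in one declaration */
  background: url(badge.png) no-repeat top right,
              url(texture.png) repeat;

  /* Background size: scale the background image to fit the element */
  -webkit-background-size: 100% auto;

  /* Text shadow */
  text-shadow: 1px 1px 2px #666;
}
```

      Browsers that don't recognize a property simply ignore that declaration, so a page styled this way degrades to a plain box elsewhere while Safari renders the full effect.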
      • Support for "Private browsing," which makes it very easy to let someone else use your computer without compromising your personal information. When private browsing is turned on, webpages are not added to the history, items are automatically removed from the Downloads window, information isn't saved for AutoFill (including names and passwords), and searches are not added to the pop-up menu in the Google search box. Until you close the window, you can still click the Back and Forward buttons to return to webpages you have opened.
      Good Points
      Bookmark management
      • Very easy to use, integrated window for searching and organizing bookmarks, history, and RSS feeds.
      • Folders in the bookmark bar can open all bookmarks at once in a single window.
      • Users can drag page links into folders on the bookmark bar directly, rather than having to visit the bookmark management page to do so. (Firefox also has this feature.)
      • Built-in synchronization of bookmarks through a .Mac ("MobileMe") account.
      Developer tools
      • Support for offline data storage, enabling more robust web applications by putting database info on the client rather than requiring a round-trip to the server.
      • Top-notch built-in tools for web developers, similar to the Firebug add-on that's available for Firefox and much more powerful than Opera's native JavaScript debugger.
      Download management
      • Full-featured Downloads window allows you to find downloads on your hard drive, open downloads, restart stalled downloads, and identify the download URL.
      History tools
      • Very easy to use, integrated window for searching and organizing bookmarks, history, and RSS feeds.
      Page information
        Safari's Page Inspector
      • Along with Safari's "Page Inspector," which developers can use for debugging and probing detailed information about a given page's or element's structure and metrics, you also get an amazing tool for inspecting the page's resources. Each script, CSS file, HTML component, and image is listed along with information on download times and size. Clicking on an item lets you see the file contents (images or source code). The Page Inspector also has a search feature by which you can search the entire set of data it includes. (Firefox has an add-on called Firebug that provides information very similar to Safari's Inspector… but it's not included as part of Firefox itself.)
      Privacy settings
      • Safari has the easiest, most accessible tool for emptying your browser cache. When you need to free up memory, make sure you're pulling a fresh copy of a web page, or remove the cached pages on your hard drive for privacy reasons, Safari's "Empty Cache" item in the main menu is very handy. Firefox's analogous function is called "Private Data," but without configuration in a sub-page of Firefox's preferences, this category includes a lot more data than simply the browser cache. Both Opera and IE 7 have this feature, but buried in various menus and preference panels and more obscurely named.
      RSS feed management
      • Safari pioneered integration of RSS subscriptions into the web browser, and it still has the easiest and best RSS feed manager. Some reviewers consider Safari's inability to set separate "fetch" schedules for each feed a negative attribute; however, I'm not sure why anyone would want to do this nowadays. After all, the update schedule is really determined by the publisher of the feed... not by the end user.
      • Safari offers the option to view and subscribe to feeds through Apple Mail as well, but still use Safari when it's more convenient.
      Search in-page
      • Safari has an excellent implementation of this feature, which was pioneered by the Firefox browser.
      Standards support
      • Safari is the only browser that has passed the CSS "Acid 3" test developed by The Web Standards Project. Safari was also the first browser to pass the WSP's "Acid 2" test, which has now been conquered by all the browsers in this review except for IE 7. (See box "Acid 3 Test Results.")
      • As previously noted, Safari is far ahead of the other browsers in adopting upcoming w3c standards for CSS 3.0.
      • Safari supports the broadest range of image formats among the tested browsers. Besides the additional formats supported by Firefox and Opera, Safari also supports JPEG 2000 and TIFF images.
      Tab management
      • Safari is the only browser that lets you delete links from your bookmark bar simply by dragging them off. With the others, you can delete using a right-click action, but Safari's method is much faster since there's no menu to navigate with the mouse.
      • Supports dragging URLs to tab bar to open new pages.
      • Offers the option of saving currently open tabs for the next session.
      • Opera and Firefox both have a feature that lets you email the URL of the current page, but Safari goes one better and lets you email the entire page contents as well. IE 7.0 has this ability as well.
      • Safari has the best support for form "autofill" of any of the browsers. Opera comes in second, only because it's a bit more work to enable this feature. With autofill, Safari can fill in data on most web forms you've used before. On the Mac, Safari data is protected by a master password using the Mac OS X "Keychain" feature.
      • Preserves form information you've typed in case you need to go back a page or two and return to the form again. You can use the back button to revisit earlier pages and then the forward button to return to the form, and your entered data will still be there.
      • Safari has a feature that lets you reopen all windows from your last session.
      • On Mac OS X, Safari opens PDF files natively in the browser window without requiring a plugin, or they can be opened in the full-featured Preview application. On Windows Vista, Safari could not open PDF files in the browser window. In fact, like Firefox, IE 7, and Opera on Windows Vista Ultimate, Safari couldn't open PDF files at all without installation of the Adobe Reader.
      • Safari's Preferences Window
      • Safari has a very simple set of Preferences with 8 clearly labeled sections: General, Appearance, Bookmarks, Tabs, RSS, Autofill, Security, and Advanced. Users of the other major browsers may find the settings provided by Safari to be too sparse; however, as a Mac user I would argue that in general Windows software provides customizable settings that are far more complex than necessary. Safari provides settings for all major user requirements, without the distraction of having to decide on settings you don't really care about.
      • Safari is built on the open-source WebKit project, so, like Firefox, browser improvements and security fixes come very quickly. (The Opera team also innovates rapidly, but Microsoft's browser development has proceeded very slowly over the years.)
      Bad Points
      Bookmark management
      • Safari has very basic import/export functions for bookmarks. Like IE, it requires users to browse the hard drive for the HTML bookmarks file to import. Safari offers no other import features.
      Privacy settings
      • Relatively weak features for customizing privacy settings. However, Safari includes a unique "Private Browsing" option, described earlier.
      Search engine support
      • Search form on toolbar only supports Google and Yahoo. (Of course, those are the top two search engines today.)
      Security settings
      • Relatively weak features for customizing security settings.
      • Safari is the only browser that does not allow users to customize its popup-ad blocker settings.
      • On Windows Vista, I found that Safari 3.1 sometimes had issues with its window display… the window seemed to frequently require refreshing in order to display the toolbar components correctly.
      • Safari's feature set on Windows isn't quite the same as on Mac OS X. The main missing features I noticed were support for in-browser PDF files without a plugin, support for bookmark synchronization, and availability of the Webclip feature.
      • Safari is only available for Mac OS X and Windows and has no support for Linux or other Unix systems.
      Browser Performance

      Measuring the performance of web browsers is an evolving science, and it seems that new tools for this purpose come out each year. There are three main measurements that these tests concentrate on:

      • Speed of parsing JavaScript,
      • Speed of parsing CSS, and
      • Speed of loading HTML and graphics.
      ZDNet Browser Performance Data

      This section presents data from a few recent, representative studies that have analyzed these browser characteristics. Nearly all of them conclude that Safari is the fastest browser on both Windows and Mac OS X. Typically, Opera comes in second, followed by Firefox and IE 7.

      ZDNet (May 2008)

      This article, written by ZDNet staff in Germany, covers all four of the browsers reviewed in this report, looking at the performance characteristics listed above as well as measures of memory management. The article provides in-depth data on the testing equipment and methodologies used and displays numerous informative charts of the data results. The accompanying charts summarize ZDNet's data on JavaScript, CSS, and HTML page loads for each browser.

      Lifehacker Browser Performance Data
      Lifehacker (June 2008)

      Lifehacker, an award-winning technology-oriented blog, published a study of browser performance in June, looking at a variety of measurements. Its results, which are less ambiguous than those of ZDNet, are summarized in the accompanying chart.

      Web Performance Inc. Browser Performance Data
      Web Performance Inc. (October 2007)

      Web Performance, a company that sells a variety of products designed to measure the performance of web applications, conducted a study last October that--ironically enough--largely eschews the use of automated tools. Their tests were designed to measure performance as a typical user would perceive it. Web Performance's tests concentrate exclusively on the speed with which the tested browsers load a set of predefined websites, and don't look specifically at JavaScript or CSS parsing. Further, its results are based on Firefox 2.0 (since 3.0 wasn't yet released) and on a beta version of Safari 3.0 (rather than 3.1). In addition, the study does not include Opera. The study's results cover load times using the browser cache as well as from the live servers, and it also presents data for load times when the browsers are pulling data from a LAN-based proxy server. The accompanying chart summarizes these results for the three tested browsers.

      Celtic Kane Browser Performance Data
      Celtic Kane (March 2008)

      From a respected web technology-related blog comes the latest in a series of tests looking at browser JavaScript speed. The author's previous tests have been widely cited and well documented. (The report page has a button that lets users run the tests on their own browser to compare it to the report's benchmarks.) In the author's first test, from August 2006 (before Apple had released Safari for Windows), the winner was Opera 9.0 (by a long shot), followed by IE 6 and Firefox 1.5. The previous test, from September 2007, found Opera 9.23 maintaining the lead, closely followed by the beta of Safari 3.0.3, IE 7, and--much further down the list--Firefox 2.0. The chart below summarizes results from the latest tests, conducted with the most recent browser releases in March 2008. The author found that Safari 3.1 had taken the lead and was 1.5 times faster than Firefox 3.0 (a beta version), while Firefox 3.0 had made an astounding performance leap over Firefox 2.0 in JavaScript parsing. The Opera 9.5 beta was nearly on a par with Firefox, while IE 7 was 3 times slower than Safari 3.1.

      Coding Horror (December 2007)

      JavaScript results from this widely-read programmer's blog are based on the newly available SunSpider test, which, by wide consensus (judging from its usage), is now considered the Rolls-Royce of web browser JavaScript tests. One of the best things about this report is that the author takes time to explain the meaning of the large range of individual metrics that make up the SunSpider test. The chart below summarizes the results. A major finding that you can observe on the Coding Horror page, but that isn't reflected in the chart here, is that IE 7 is two times slower than Firefox 2.0 and four times slower than Opera, the front-runner in this test.

      Ars Technica (April 2008)

      In response to the recent swelling of interest in comparing the speed of Safari (and its open-source cousin, WebKit) with that of the newly released Firefox 3.0, Ars Technica used the SunSpider test to take a look recently. Their test only includes Firefox and Safari, leaving out Opera and, because it was run on an iMac, IE 7.0. Their test is one of the very few that also includes the nightly WebKit release, which typically runs several months ahead of Safari in its code base. Ars Technica found that WebKit was the fastest browser in parsing JavaScript, followed closely by Safari, and then--a good distance back--Firefox 3.0.

      Additional Test Results

      Zimbra.com: And The Winner of the Browser Wars is….

      Computerzen.com: Windows Browser Speed Shootout - IE7, Firefox2, Opera9, Safari for Windows Beta


      In nearly all of these tests, Safari is currently leading the pack on both Windows and Mac OS X systems in overall measurements of speed for loading web pages and for parsing JavaScript and CSS. For second place, the results are a mixed bag, with some studies showing Opera ahead and others showing Firefox. Overall, it appears that Firefox 3.0 has been given a major speed boost, and it tops the latest Opera release on Windows Vista. That said, Opera remains significantly faster than Firefox on Mac OS X "Leopard."

      Also not contested is the browser bringing up the rear in these tests. In virtually all of the recent browser tests, IE 7 measures significantly slower than the other modern browsers, especially in tests of JavaScript performance. That said, there are some tests of HTML-load performance that show IE 7 somewhat faster than Firefox 3.0.


      From a purely objective standpoint, based on the performance characteristics and feature set of each browser in this study, I would make the following recommendations to organizations seeking to get beyond their reliance on the outdated Internet Explorer 6.0, or to offer their employees the best browsing experience today:

      1. Eliminate support for IE 6 as soon as possible, since it is a legacy browser with a dramatically inferior feature set as well as inferior performance. Originally, I had planned to include a section here that would go into detail to explain IE 6's shortcomings. However, the reader will infer from the fact that none of the recent industry studies even include IE 6 in their analyses, and from IE 6's rapidly dwindling market share, that IE 6 will be totally obsolete soon. I predict IE 6's market share will drop below 10% in 12 months.
      2. Add support for Firefox 3.0 as your organization's primary browser. Even though Firefox may not be the best browser in all categories, it is more familiar to those who have tried alternative web browsers, and its interface is not dramatically different from IE 6, so users can be migrated with minimal disruption. My only concern about Firefox is the many extensions that are available for that browser. Users will want to try these out, and it's not clear whether they will have the rights to do so in a tightly controlled network environment. Even if they do, users who install a large number of different extensions could make the browser harder to support. Extensions can cause problems with the browser itself, and unknown extensions can make it more difficult for Help Desk personnel to determine the cause of problems that may arise. Extensions also increase the memory load required to support Firefox. My recommendation for this potential problem is that the organization's IT group canvass users and industry reports to determine a standard set of extensions that it will support. Beyond that, it may be wise to lock down Firefox so that users can add further extensions only through some sort of approval process.
      3. If you still run Windows XP on users' desktops, I'd strongly recommend that you make IE 7 available as a download and encourage everyone to upgrade from IE 6. However, IE 7's quirky interface will likely cause confusion among users who will already have questions about the use of tabs and RSS feeds, thereby increasing the resource cost of supporting them in such a transition. In addition, because IE 7 is so far behind the other browsers in adopting and adhering to current web standards, development of experimental web interfaces for your Intranet will be difficult. The Intranet is the best "sandbox" in which developers can try out new web technologies, adopting those that succeed in major internal web applications and rejecting those that do not. Therefore, it's very important that your primary web browser maintain parity with the state of the art in this regard.
      4. Make Safari 3.1 available as a download, both for Mac users and for Windows users who want to try it out. Safari 3.1 is, by a variety of measures, the best web browser now available, and IT organizations should make such a browser available to their employees. Safari's interface is extremely simple and easy to use, so training and help costs should be minimal. Further, Safari's inclusion in Apple's iPhone makes it an interesting platform for application development--not only for internal use but possibly for customers as well. There will be an explosion in the availability of iPhone applications this year and next, and your organization could certainly be part of that by providing tools useful to staff and customers.
      April 14th, 2008

      WebKit/Safari Keep Blazing the Trail to CSS 3.0

      Note: This article was originally published in July 2007 and has now been updated with some of the newer CSS 3.0 tricks that are now available in WebKit, the open-source framework on which Safari is built. (Many of these tricks are also now available to users of Safari 3.1, released in March 2008.) Although the textual introduction has been updated, it is still written mostly from its original July 2007 perspective.

      A lot has happened in the world of web browsers and CSS 3.0 since I wrote this article last summer at the time Safari 3.0 became available as a public beta. Besides WebKit/Safari, Opera, iCab, Konqueror, and Firefox have all made progress in adopting CSS 3.0 specifications, the next generation of the W3C's Cascading Style Sheets standard.

      However, the WebKit team continues to lead the pack, as they have since I first contemplated this article over a year ago. In the last 6 months, that team has not only adopted more of the CSS 3.0 specs ahead of the others, but they have proposed several exciting new specs of their own, which the W3C is taking up as draft recommendations.

      In addition to updating the state of CSS 3.0 in WebKit/Safari, I've also added some new demos for the Backgrounds section of my CSS playground at the end of the article.

      Here are the CSS 3.0 features I wrote about in July 2007:

      1. Box-shadow: Yes! Add drop shadows through CSS!
      2. Multi-column layout: Can we really do this now? With HTML?
      3. Resize: Give JavaScript hacks a rest and let users relax when typing input on web pages.
      4. Rounded corners: The corners of any element can be made round to any radius you specify.
      5. Colors with transparency: There goes another ugly hack from way back!
      6. Background image controls: Remember how great it was when you could add images as well as colors to an element's background CSS style? Well, it's about to get a whole lot better!
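The six features above can all be sketched in a few lines of CSS. Here's a minimal example of each, using the vendor-prefixed syntax WebKit understood in this era; the class names, file names, and values are my own illustrations, not taken from the article's demos:

```css
/* 1. Box-shadow: a drop shadow with no sliced images */
.card { -webkit-box-shadow: 3px 3px 8px rgba(0, 0, 0, 0.4); }

/* 2. Multi-column layout: flow plain HTML text into columns */
.article { -webkit-column-count: 3; -webkit-column-gap: 1.5em; }

/* 3. Resize: let the user drag an element to a comfortable size */
.note { resize: both; overflow: auto; }

/* 4. Rounded corners at any radius */
.badge { -webkit-border-radius: 10px; }

/* 5. Colors with transparency: 50%-opaque red, no PNG hack */
.overlay { background-color: rgba(255, 0, 0, 0.5); }

/* 6. Better background control: multiple images, CSS-scaled */
.banner {
  background: url(left.png) top left no-repeat,
              url(right.png) top right no-repeat;
  -webkit-background-size: 100px auto;
}
```

In standards-track CSS these later lost their `-webkit-` prefixes (`box-shadow`, `column-count`, `border-radius`, `background-size`), but at the time of writing the prefixed forms were what Safari shipped.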

      And since then, WebKit and Safari 3.1 have adopted the following bleeding-edge CSS features:

      1. Adopted last October, WebKit introduced its first take at CSS Transforms, which it has submitted to the W3C for consideration. With CSS Transforms, elements can be scaled, rotated, skewed and translated... all without using JavaScript!
      2. Announced at the same time is the equally exciting implementation of CSS Animations. At the moment, the only type of animation that's documented and demonstrated on the WebKit blog is based on CSS Transitions, which let you define how an object or attribute changes over time from one state to another. Using this specification, you can now program many kinds of animations with CSS alone.
      3. Also in October, WebKit added the CSS Web Fonts feature, which lets designers beam fonts to users through CSS and HTML, approximating the capabilities of PDF in a much lighter-weight form.
      4. Then, after a lull, things started to heat up again last month, when Apple released Safari 3.1. Safari 3.1 incorporated all of the CSS 3.0 features WebKit had pioneered earlier, plus it added a bunch of things the WebKit team hadn't blogged about. Chief among these was support for CSS Attribute Selectors. This is something of a holy grail to advanced web developers, since it opens up a whole world of possibilities for using the Document Object Model (DOM) to build better web interfaces. When released, WebKit was the first and only browser to fully support this geeky, but highly practical feature. (Some of the other browsers have implemented partial support.)
      5. And then, just today, WebKit added support for CSS Gradients to its portfolio. Gradients are not yet a CSS 3.0 specification, but they are part of the HTML 5.0 spec. No doubt Apple's implementation will be referred to the W3C for consideration. (This is the only new feature in this list that as yet works only in the latest WebKit nightly build.)
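For the newer batch, a sketch in the same spirit. The last rule uses WebKit's original `-webkit-gradient()` function, which predates the gradient syntax the W3C later standardized; again, all selectors, names, and files here are illustrative placeholders:

```css
/* 1. Transforms: scale, rotate, skew, translate without JavaScript */
.photo { -webkit-transform: rotate(-5deg) scale(1.1); }

/* 2. Transitions: animate a property change over half a second */
.photo { -webkit-transition: opacity 0.5s ease-in-out; }
.photo:hover { opacity: 0.6; }

/* 3. Web Fonts: beam a font to the reader through CSS */
@font-face {
  font-family: "HouseFont";
  src: url(housefont.ttf);
}
body { font-family: "HouseFont", sans-serif; }

/* 4. Attribute selectors: style links by what they point at */
a[href$=".pdf"] { background: url(pdf-icon.png) no-repeat left center; }

/* 5. Gradients, in WebKit's original (pre-standard) syntax */
.toolbar {
  background: -webkit-gradient(linear, left top, left bottom,
                               from(#ffffff), to(#cccccc));
}
```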

      This article lists the CSS 3.0 features that were first available in Safari or the nightly WebKit browser. Besides listing them, I've tried to keep up with what the features can actually do for me as a web designer, so each feature is accompanied by a demo or two and some explanatory notes. Since some of the features are a bit complex, and almost totally undocumented by the W3C (which only publishes the standards, not the implementation details), Apple, and the WebKit team alike, I've had to experiment to discover what some of the attributes do.

      Fortunately, a forward-thinking group of techno-weenies is keeping a close eye on the emerging details of the CSS 3.0 implementations, and they have done some experimenting of their own. Since they're in the same boat I am (actually, they have a much better boat!), it's not surprising that I'm finding ambiguities in the way they've built some of their demos. Still, it's the closest thing to documentation that I've found, and I highly recommend that anyone interested in learning more about CSS 3.0 pay a visit to the terrific CSS3.info website. In fact, you'll find links to their pages throughout this site.

      Following CSS3.info's lead, I'm organizing the CSS 3.0 features available (at this time) in Safari into four categories: Borders, Background, Effects, and User Interface. These correspond to the W3C draft modules for CSS 3.0. The fifth tab in the navigation control below gathers the CSS 3.0 specifications that have been implemented by Safari and at least one other major browser. As you browse through these up-and-coming features, I think you'll understand my excitement about the benefits they offer to web graphic and user-interface designers.

      In the first release of this article, I only had demos for the section on Borders. Today I've added demos for CSS Backgrounds, and I plan to continue experimenting with the rest as time permits. In the meantime, as mentioned before, do pay a visit to CSS3.info for their demos of each, or follow the links to demos at the WebKit site. I hope you're inspired to take up a keyboard and pound out some experiments of your own!

      • CSS3 Borders
      • CSS3 Backgrounds
      • CSS3 Text Effects
      • CSS3 User Interface Methods
      • Other Cool CSS3 Techniques
      December 19th, 2006

      Time To Learn More About Microformats!

      Hmmm... I've been reading about Microformats for a while now, but not until tonight did I finally "get it." It was an implementation of the hAtom microformat that caught my eye... The site was using it to generate an RSS feed, basically. Rather than posting a separate XML file as in a traditional feed, microformats let you embed attributes within your HTML (typically class attributes) that convey information about the structure and meaning of the page elements. Does this sound like the semantic web to anyone? Maybe it's a first step in that direction... Also, I'm bookmarking this excellent PDF "cheat sheet" that provides a summary of the tags for many of the main microformats that exist (and there are way too many...!): hCard, hCalendar, hResume, Address, Geolocation, hAtom, hReview, XFN, Rel-Tag, and more.
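As a concrete illustration of the hAtom idea, here is a minimal sketch of a blog post marked up so a parser can extract a feed entry from the page itself. The class names come from the hAtom draft (hfeed, hentry, entry-title, and so on); the title, date, and author are made-up placeholders:

```html
<!-- hAtom: ordinary HTML that doubles as a machine-readable feed entry -->
<div class="hfeed">
  <div class="hentry">
    <h2 class="entry-title">Time To Learn More About Microformats!</h2>
    <!-- The abbr pattern: human-readable date outside,
         machine-readable ISO date in the title attribute -->
    <abbr class="published" title="2006-12-19T23:00:00-05:00">
      December 19th, 2006
    </abbr>
    <div class="entry-content">
      <p>Not until tonight did I finally "get it"...</p>
    </div>
    <address class="author vcard">
      <span class="fn">Jane Doe</span>
    </address>
  </div>
</div>
```

A service reading this page can reconstruct the feed's title, publication date, body, and author from the class attributes alone, with no separate XML file.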
      July 31st, 2006

      Protecting Windows: How PC Malware Became A Way of Life

      Article Summary

      This is a very long article that covers several different, but related, topics. If you are interested but don't have time to read the entire article, here's a summary of the main themes, with links to the sections of text that cover them:

      1. Required Security Awareness Classes Reinforce Windows Monopoly in Federal Agencies.
        For the third straight year, I’ve been forced to take online “security awareness” training at my Federal agency that includes modules entirely irrelevant–and in fact, quite insulting–to Macintosh users (myself included). The online training requires the use of Internet Explorer, which doesn’t even exist for Mac OS X and in fact is the weakest possible browser to use from a security perspective. It also reinforces the myth that computer viruses, adware, and malicious email attachments are a problem for all users, when in fact they only are a concern to users of Microsoft Windows. In presenting best practices for improved security, the training says absolutely nothing about the inherent security advantages of switching to Mac OS X or Linux, even though this is an increasingly well known and non-controversial solution. This part of the article describes the online training class and the false assumptions behind it in detail.
      2. IT Managers Are Spreading and Sustaining Myths About the Cause of the Malware Plague.
        These myths serve to protect the status quo and their own jobs at the expense of users and corporate IT dollars. None of the following “well known” facts are true, and once you realize that malware is not inevitable–at the intensity Windows users have come to expect–you realize there actually are options that can attack the root cause of the problem.
        1. Windows is the primary target of malware because it’s on 95% of the world’s desktops,
        2. Malware has worsened because there are so many more hackers now thanks to the Internet, and
        3. All the hackers attack Windows because it’s the biggest target.
        This section of the article describes the history of the malware plague and its actual root causes.

      3. U.S. IT Management Practices Aren’t Designed for Today’s Fast-Moving Technology Environment.
        This part of the article discusses why IT management failed to respond effectively to the disruptive plague of malware in this century, and then presents a long list of proposed “Best Practices” for today’s Information Technology organizations. The primary theme is that IT shops cover roughly two kinds of activity: (1) Operations, and (2) Development. Most IT shops are dominated by Operations managers, whose impulse is to preserve the status quo rather than investigate new technologies and alternatives to current practice. A major thrust of my proposed best practices is that the influence of operations managers in the strategic thinking of IT management needs to be minimized and carefully monitored. More emphasis needs to be accorded to the Development thinkers in the organization, who are likely to be more attuned to important new trends in IT and less resistant to and fearful of change, which is the essence of 21st century technology.

      Ah, computer security training. Don’t you just love it? Doesn’t it make you feel secure to know that your alert IT department is on patrol against the evil malware that slinks in and takes the network down every now and then, giving you a free afternoon off? Look at all the resources those wise caretakers have activated to keep you safe!

      • Virulent antivirus software, which wakes up and takes over your PC several times a day (always, it seems, just at the moment when you actually needed to type something important).
      • Very expensive, enterprise-class desktop-management software that happily recommends to management when you need more RAM, when you’ve downloaded peer-to-peer software contrary to company rules, and when you replaced the antivirus software the company provides with a brand that’s a little easier on your CPU.
      • Silent, deadly, expensive, and nosy mail server software that reads your mail and removes files with suspicious-looking extensions, or with suspicious-looking subject lines like “I Love You“, while letting creepy-looking email with subject lines like “You didnt answer deniable antecedent” or “in beef gunk” get through.
      • Expensive new security personnel, who get to hire even more expensive security contractors, who go on intrusion-detection rampages once or twice a year, spend lots of money, gum up the network, and make recommendations for the company to spend even more money on security the next year.
      • Field trips to Redmond, Washington, to hear what Microsoft has to say for itself, returning with expensive new licenses for Groove and SharePoint Portal Server (why both? why either?), and other security-related software.
      • New daily meetings that let everyone involved in protecting the network sit and wring their hands while listening to news about the latest computing vulnerabilities that have been discovered.
      • And let’s not forget security training! My favorite! By all means, we need to educate the staff on the proper “code of conduct” for handling company information technology gear. Later in the article, I’ll tell you all about the interesting things I learned this year, which earned me an anonymous certificate for passing a new security test. Yay!

      In fact, this article started out as a simple exposé of the somewhat insulting online training I just took. But one thought led to another, and soon I was ruminating on the Information Technology organization as a whole, and on the effectiveness and rationality of its response to the troublesome invasion of micro-cyberorganisms over the last 6 or 7 years.

      Protecting the network

      Who makes decisions about computer security for your organization? Chances are, it's the same guys who set up your network and desktop computer to begin with. When the plague of computer viruses, worms, and other malware began in earnest, the first instinct of these security czars was understandable: Protect!
                Protect the investment…
                          Protect the users…
                                    Protect the network!

      And the plague itself, which still ravages our computer systems… was this an event that our wise IT leaders had foreseen? Had they been warning employees about the danger of email, the sanctity of passwords, and the evil of internet downloads prior to the first big virus that struck? If your company’s IT staff is anything like mine, I seriously doubt it. Like everyone else, the IT folks in charge of our computing systems at the office only started paying attention after a high-profile disaster or two. Prior to that, it was business as usual for the IT operations types: “Ignore it until you can’t do so anymore.” A vulgar translation of this “code of conduct” is often used instead: “If it ain’t broke, don’t fix it.”

      Unfortunately, the IT Powers-That-Be never moved beyond their initial defensive response. They never actually tried to investigate and treat the underlying cause of the plague. No, after they had finished setting up a shield around the perimeter, investing in enterprise antivirus and spam software, and other easy measures, it’s doubtful that your IT department ever stepped back to ask one simple question: How much of the plague has to do with our reliance on Microsoft Windows? Would we be better off by switching to another platform?

      It’s doubtful that the question ever crossed their minds, but even if someone did raise it, someone else was ready with an easy put-down or three:

      1. It’s only because Windows is on 95% of the world’s desktops.
      2. It’s only because there are so many more hackers now.
      3. And all the hackers attack Windows because it’s the biggest target.

      At about this time in the Computer Virus Wars, the rallying cry of the typical IT shop transitioned from “Protect the network… users… etc.” to simply:
                  Protect Windows!

      Windows security myths

      The “facts” about the root causes of the Virus Wars have been repeated so often in every forum where computer security is discussed—from the evening news to talk shows to internal memos and water-cooler chat—that most people quickly learned to simply shut the question out of their minds. There are so many things humans worry about in 2006, and so many things we wonder about, that the more answers we can actually find, the better. People nowadays cling to firm answers like lifelines, because there’s nothing worse than an unsolved mystery that could have a negative impact on you or your loved ones.

      Only problem is, the computer security answers IT gave you are wrong. The rise of computer viruses, email worms, adware, spyware, and indeed the whole category now known as “malware” simply could not have happened without the Microsoft Windows monopoly of both PCs and web browsing, and without the way the product’s corporate owners responded to the threat. In fact, the rise of the myth helped prolong the outbreak, and perhaps made it worse, since it let Microsoft off the hook of responsibility… thus conveniently keeping the company’s consideration of potentially expensive solutions at a very low priority.

      Even though the IT managers who actually get to make decisions didn’t see this coming, it’s been several years now since some smart, brave (in at least one case, a job was lost) people raised a red flag about the vulnerability of our Microsoft “monoculture” to attack. They warned us that reliance on Microsoft Windows, and the impulse to consolidate an entire organization onto one company’s operating system, was a recipe for disaster. Because no one actually raised this warning beforehand, the folks in the mid-to-late 1990s who were busily wiping out all competing desktops in their native habitat can perhaps be forgiven for doing so. However, IT leaders today who still don’t recognize the danger—and in fact actively resist or ignore the suggestion by others in their organization to change that policy—are being recklessly negligent with their organization’s IT infrastructure. It’s now generally accepted by knowledgeable, objective security experts that the Microsoft Windows “monoculture” is a key component that let the virus outbreak get so bad and stay around for so long. They strongly encourage organizations to loosen the reins on their “Windows only” desktop policy and allow a healthy “heteroculture” to thrive in their organization’s computer desktop environment.

      Full disclosure: I was one of the folks who warned their IT organization about the Windows security problem and urged a change of course several years ago. From a white paper delivered to my CIO in November 2002, this was one of my arguments for allowing Mac OS X into my organization as a supported platform:

      Promoting a heterogeneous computing environment is in NNN’s best interest from a security perspective. Macintoshes continue to be far more resistant to computer viruses than Windows systems. The latest studies show that this is not just a matter of Windows being the dominant desktop operating system, but rather it relates to basic security flaws in Windows.

      About a year later, when the CyberInsecurity report was released, I provided a copy to my company’s Security Officer. But sadly, both efforts fell on deaf ears, and continue to do so.

      1999: The plague begins

      The first significant computer virus—probably the first one you and I noticed—was actually a worm. The “Melissa Worm” was introduced in March 1999 and quickly clogged corporate email systems, shutting down a significant number of mail servers. Melissa spread as a macro virus in Microsoft Word documents, mailing itself onward through Outlook. (Note: Wikipedia now maintains a Timeline of Notable Viruses and Worms from the 1980s to the present.)

      Now, as it so happens, 1999 was also the year when it became clear that Microsoft would win the browser war. In 1998, Internet Explorer had only 35% of the market, still a distant second to Netscape, with about 60%. Yet in 1999, Microsoft’s various illegal actions to extend its desktop monopoly to the browser produced a complete reversal: When history finished counting the year, IE had 65% of the market, and Netscape only 30%. IE’s share rose to over 80% the following year. This development is highly significant to the history of the virus/worm outbreak, yet how many of you have an IT department enlightened enough to help you switch from IE back to Firefox (Netscape’s great grandchild)? The browser war extended the growing desktop-OS monoculture to the web browser, which was the window through which a large chunk of malware was to enter the personal computer.

      Chart from Wikipedia shows browser usage for major browser types from 1994-2006.

      You see, by 1994, a year or so before the World Wide Web became widely known through the Mosaic and Netscape browsers, Microsoft had already achieved dominance of the desktop computer market, with a market share of more than 90%. A year later, Windows 95 nailed the lid on the coffin of its only significant competitor, Apple’s Macintosh operating system, which in that year had only about 9% of corporate desktops. Netscape was the only remaining threat to a true computing monoculture, since, as the company had recognized, the web browser was going to become the operating system of the future.

      Microsoft’s hardball tactics in beating back Netscape led directly to the insecure computer desktops of the 2000s by ensuring that viruses written in “Windows DNA” would be easy to disseminate through Internet Explorer’s ActiveX layer. ActiveX basically let Microsoft’s legions of Visual Basic semi-developers write garbage programs that could run inside IE, and it became a simple matter to write garbage programs as Trojan Horses to infect a Windows PC. ActiveX was a heckuva lot easier to write to than Netscape’s cross-platform plug-in API, which gave IE a huge advantage as developers sought to include Windows OS and MS Office functionality directly in the web browser.

      A similar strategy was taking place on the server side of the web, as Microsoft’s web server, Internet Information Server (IIS), had similarly magical tie-ins to everybody’s favorite desktop OS. Fortunately for the business world, the guys in IT who had the job of managing servers were always a little bit brighter than the ones who managed desktops. They understood the virtues of Unix systems, especially in the realm of security. IT managers weren’t willing to fight for Windows at the server end of the business once IIS was revealed to have so many security holes. As a result, Windows, and IIS, never achieved the dominance of the server market that Microsoft hoped for, although you can be sure that the company hasn’t given up on that quest.

      The other major avenue for viruses and worms has been Microsoft Office. As noted, Melissa attacked Microsoft Word documents, but this was a fairly unsophisticated tactic compared with the opportunity presented by Microsoft’s email program, Outlook. Companies with Microsoft Exchange servers in the background and Outlook mail clients up front, which by the late 1990s had become the dominant culture for email in corporate America, presented irresistible targets for hackers.

      Through the web browser, the email program, the word processor, and the web server, the opportunities for cybermischief simply multiplied. Heck, you didn’t even have to be a particularly good programmer to take advantage of all the security holes Microsoft offered, which numbered at least as many as would be needed to fill the Albert Hall (I’m still not sure how many that is).

      So… the answer to the question of why viruses and worms disproportionately took down Windows servers, networks, and desktops starting in 1999 isn’t that Microsoft was the biggest target… It was because Microsoft Windows was the easiest target.

      And the answer to why viruses and worms proliferated so rapidly in the 2000s, and with them the Windows-hacker hordes, is simply that hacking Microsoft Windows became a rite of passage on your way to programmer immortality. Why try to attack the really difficult targets in the Unix world, which had already erected mature defenses by the time the Web arrived, when you could wreak havoc for a day or a week by letting your creation loose at another clueless Microsoft-Windows-dominated company? Once everyone was using both Windows and IE, spreading malware became child’s play. You could just put your code in a web page! IE would happily swallow the goodie, and once inside, the host was defenseless.

      Which leads me to the next question whose answer has been obscured in myth: Exactly why was the host defenseless? That is, why couldn’t Windows fight off viruses and worms that it encountered? It doesn’t take a physician to know the answer to that one, folks. When you encounter an organism in nature that keeps getting sick when others don’t, it’s a pretty good bet that there’s something wrong with its immune system.

      The trusting computer

      It’s not commonly known or understood outside of the computer security field that Windows represents a kind of security model called “trusted computing.” Although you’d think this model would have been thoroughly discredited by our collective experience with it over the last decade, it’s a model that Microsoft and its allies still believe in… and still plan to include in their future products such as Windows Vista. Trusted computing has a meaning that’s shifted over the years, but as embodied by Microsoft Windows variants since the beginning of the species, it means that the operating system trusts the software that gets installed on it by default, rather than being suspicious of unknown software by default.

      That description is admittedly a simplification, but this debate needs to be simplified so people can understand the difference between Windows and the competition (to the extent that Windows has competition, I’m talking about Mac OS X and Linux). The difference, which clearly explains why Windows is unable to defend itself from attack by viruses and worms, stems from the way Windows handles user accounts, compared with the way Unix-like systems, such as Linux and Mac OS X, handle them. Once you understand this, I think it will be obvious why the virus plague has so lopsidedly affected Windows systems, and it will dispel another of the myths that have been spread around to explain it.

      Windows has always been a single-user system, and to do anything meaningful in configuring Windows, you had to be set up as an administrator for the system. If you’ve ever worked at a company that tried to prevent its users from being administrators of their desktop PCs, you already know how impossible it is. You might as well ask employees to voluntarily replace their personal computer with a dumb terminal. [Update 8/7/06: I think some readers rolled their eyes at this characterization (I saw you!). You must be one of the folks stuck at a company that has more power over its employees than the ones I've worked for in the last 20-odd years. Lucky you! I don't have data on whose experience is more common, but naturally I suspect it's not yours. No matter... this is certainly true for home users ....] And home users are always administrators by default… besides, there’s nothing in the setup of a Windows PC at home that would clearly inform the owner that they had an alternative way of setting up their user accounts. (Update 8/7/06: Note to Microsoft fans who take umbrage at this characterization of their favorite operating system: Here’s Microsoft’s own explanation of the User Accounts options in Windows XP Professional.)

      The Unix difference: “Don’t trust anyone!”

      On Unix systems, which have always been multiuser systems, the system permissions of a Windows administrator are virtually the same as those granted to the “superuser,” or “root” user. In the Unix world, ordinary users grow up living in awe of the person who has root access to the system, since that access is typically held by only one or two system administrators. Root users can do anything, just as a Windows administrator can.

      But here’s the huge difference: A root user can give administrator access to other users, granting them privileges that let them do the things a Windows administrator normally needs to do—system administration, configuration, software installing and testing, etc—but without giving them all the keys to the kingdom. A Unix user with administrator access can’t overwrite most of the key files that hackers like to fool with—passwords, system-level files that maintain the OS, files that establish trusted relationships with other computers in the network, and so on.

      Windows lacks this intermediate-level administrator account, as well as other finer-grained account types, primarily because Windows has always been designed as a single-user system. As a result, software that a Windows user installs is typically running with privileges equivalent to those of a Unix superuser, so it can do anything it wants on their system. A virus or worm that infects a Unix system, on the other hand, can only do damage to that user’s files and to the settings they have access to as a Unix administrator. It can’t touch the system files or the sensitive files that would help a virus replicate itself across the network.

      This crucial difference is one of the main ways in which Mac OS X and Linux are inherently more secure than Windows is. On Mac OS X, the root user isn’t even activated by default. Therefore, there’s absolutely no chance that a hacker could log in as root: The root user exists only as a background-system entity until a Mac user deliberately instantiates her, and very few people ever do. I don’t think this is the case on Linux or other Unix OSes, but it’s one of the things that makes Mac OS X one of the most secure operating systems available today.

      There are many other mistakes Microsoft has made in designing its insecure operating system—things it could have learned from the Unix experience if it had wanted to. But this one is the doozy that all by itself puts to rest the notion that Microsoft Windows has been attacked more because people don’t like Microsoft, or because it’s the biggest target, or all the other excuses that have been promulgated.

      The security awareness class

      In response to the cybersecurity crisis, one of the steps our Nation’s IT cowards, er, leaders have taken across the country is to purchase and customize computer security “training.” Such training is now mandatory in the Federal Government and is widely employed in the private sector. I have been forced to endure it for three years now, and I’ve had to pass a quiz at the end for the last two. As a Macintosh user, I naturally find the training offensive, because so much of it is irrelevant to me. It’s also offensive because it is the byproduct of decisions my organization’s IT management has made over the years that are, in my view, patently absurd. If the decisions had been mine, I would never have allowed my company to become completely dependent on the technological leadership of a single company, especially not one whose product was so difficult to maintain.

      It’s a truism to me, and has been for several years now, that Windows computers should simply not be allowed to connect to the Internet. They are too hard to keep secure. Despite the millions that have been spent at my organization alone, does anybody actually believe that our Windows monoculture is free from worry about another worm- or virus-induced network meltdown? Of course not. And why not? Why, it’s because these same IT cowards (er, leaders) think such meltdowns are inevitable.

      The supposed inevitability of this century’s computer virus outbreaks is one of the implicit myths about their origin:

      “Why switch to another operating system, since all operating systems are equally vulnerable? As soon as the alternative OS becomes dominant, viruses geared to that OS will simply return, and we’ll have to fight all over again in an unknown environment.”

      My hope is that if you’ve been following my argument thus far, you now realize that this type of attitude is baseless, and simply an excuse to maintain the status quo.

      Indeed, the same IT cowards (er, leaders) who actually believe this are feeding Microsoft propaganda about computer security to their frightened and techno-ignorant employees through “security awareness” courses such as this. Keep in mind that, as some of the course’s lessons point out, companies attempting to train their employees in computer security are doing so not only for their office PC, but for their home PC as well. The rise of telecommuting, another social upheaval caused by the Internet’s easy availability, means that the two are often the same nowadays. So the lessons American workers are learning are true only if they have Windows computers at home, and only if Windows computers are an inevitable and immutable technology in the corporate landscape, like desks and chairs.

      Here are some of the things I learned from my organization’s “Computer Security Awareness” class:

      This computer security online training requires Internet Explorer.

      1. Always use Internet Explorer when browsing the web.
        How many times must employees beg their companies to use Firefox, merely because it’s faster and has better features, before they will listen? In the meantime, to ensure that as many viruses and worms can enter the organization as possible, so that the expensive antivirus software we’ve purchased has something to do, IT management makes sure that as many people continue using IE as possible. I’m being facetious here. The reason they do this is that it’s what the training vendor told them to say, and today’s Federal IT managers always do as instructed by their contractors.

        While you can find data on the web to support the view that IE is at least as secure as Firefox, common sense should guide your decisionmaking here rather than the questionable advice of dueling experts. The presence of ActiveX in IE, all by itself, should be enough to make anyone in charge of an organization’s security jump up and down to keep IE from being the default browser. And that’s not even usually listed as a vulnerability, because it’s no longer “new”. The “shootouts” that you read now and then pertain to new vulnerabilities that are found, and to the tally of vulnerabilities a given browser maker has “fixed”… not to inherent architectural vulnerabilities like ActiveX and JScript (Microsoft’s proprietary extension to JavaScript).

      2. Use Windows computers at home.
        The belief among IT management in recent years is that if we can get everyone to use the same desktop “image” at work and at home, we can control the configuration and everything will be better. Um, no. Mac users don’t have any fear of strange Windows file types, and organizations that encourage users to switch to Mac OS X or to Linux, instead of discouraging such switching, immediately improve their security posture. For example, here’s some recent advice from a security expert at Sophos:
        “It seems likely that Macs will continue to be the safer place for computer users for some time to come.”

        And from a top expert at Symantec comes this recent news:

        Simply put, at the time of writing this article, there are no file-infecting viruses that can infect Mac OS X… From the 30,000 foot viewpoint of the current security landscape, … Mac OS X security threats are almost completely lost in the shadows cast by the rocky security mountains of other platforms.

      3. All computers on the Internet can be infected within 30 minutes if not protected.
        The course taught us that all computers need to be “configured” to be secure, and that an unprotected machine would be infected by a virus within 30 minutes on the web. No… of all currently available operating systems, this is true only of Microsoft Windows. Mac OS X is an example of a Unix system that’s been designed to use the best security features of the Unix platform by default, and no user action or configuration is required to ensure this.
        Here’s one of the URLs (from the SANS Institute) that the course provided, which actually makes pretty clear that Windows systems are the most insecure computers you can give your employees today: Computer Survival History.
      4. Spyware is a problem for all computers.
        I imagine that spyware is the most crippling day-to-day aspect of using Windows. My son insisted on trying Virtual PC a couple of years ago, and on his own, his virtual Windows XP became completely unusable because of malware of various kinds within about 20 minutes. He was using Internet Explorer, of course, because that’s what he had on his computer. I installed Firefox for him, and his web surfing in Windows has been much smoother since then. He still has to run antivirus and antiadware software to keep the place “clean,” but needless to say, he has never asked to use IE again. This experience alone demonstrated what I had already read to be true: The web is not a safe place in the 21st century if you’re using Windows. This is one of the primary reasons I use Mac OS X: In all the 5 years I’ve used Mac OS X, I have never once encountered adware. And that has absolutely nothing to do with what websites I surf, or don’t surf, on the web. (And that’s all I’m going to say about it!)
      5. Viruses are a threat to all home computers.
        What I said previously about adware: ditto for computer viruses. To this day, there is not a single virus that has successfully infected a Mac OS X machine. (The one you heard about earlier this year was a worm, not a virus, and it only affected a handful of Macs, doing very little damage in any case.) As even Apple will warn you, that doesn’t mean it’s impossible and will never happen. However, it does mean that if Macs rise up and take over the world, amateur virus writers will all have to retire, and you’ll cut the supply line of new virus hackers to the bone. Without Windows to hack, it simply won’t be fun anymore. No quick kills. No instant wins. Creating a successful virus for Mac OS X will take years, not days. Human nature being what it is, I just know there aren’t many hackers who would have the patience for that.

        A huge side benefit for Mac users in not having to worry about viruses and worms is that you don’t have to run CPU-sucking antivirus software constantly. Scheduling it to run once a week wouldn’t be a bad idea, but you can do that when you’re sleeping and not have to suffer the annoying slowdowns that are a fact of PC users’ lives every time those antivirus hordes sally forth to fight the evil intruders. Or… you could disconnect your Windows PC from the Internet, and then you could turn that antivirus/antispyware thingy off for good.

      6. Malicious email attachments are a threat to all.
        **Y A W N** Can we go home now?
        Sometimes, I open evil Windows attachments just for the fun of it… to show that I can do so with impunity. Then I send them on to the Help Desk to study. :-) (Just kidding.)

      Change resisters in charge

      Other than Microsoft, why would anyone with a degree in computer science or otherwise holding the keys to a company’s IT resources want to promulgate such tales and ignore the truth behind the virus plague? That’s a simple one: They fear change.

      To admit that Windows is fundamentally flawed and needs to be replaced or phased out in an organization is to face the gargantuan task of transitioning a company’s user base from one OS to another. In most companies, this has never been done, except to exorcise the stubborn Mac population. Although its operating system is to blame for the millions of dollars a company typically has had to spend in the name of IT security over the last 5 years, Microsoft represents a big security blanket for the IT managers and executives who must make that decision. Windows means the status quo… it means “business as usual”… it means understood support contracts and costs. All of these things are comforting to the typical IT exec, who would rather spend huge amounts of his organization’s money and endure sleepless nights worrying about the next virus outbreak than to seriously investigate the alternatives.

      Managers like this, who have a vested interest in protecting Microsoft’s monopoly, are the main source of the Windows security myths, and it’s a very expensive National embarrassment. The IT organization is simply no place for people who resist change, because change is the very essence of IT. And yet, the very nature of IT operations management has ensured that change-resisters predominate.

      Note that I said IT operations. As a subject for a future article, I would very much like to elaborate on my increasingly firm belief that IT management should never be handed to the IT segment that’s responsible for operations—for “keeping the trains running.” Operations is an activity that likes routines, well defined processes, and known components. People who like operations work have a fondness for standard procedures. They like to know exactly which steps to take in a given situation, and they prefer that those steps be written down and well-thumbed.

      By contrast, the developer side of the IT organization is where new ideas originate, where change is welcomed, where innovation occurs. Both sides of the operation are needed, but all too often the purse strings and decisionmaking reside with the operations group, which is always going to resist the new ideas generated by the other guys. In this particular situation, solutions can only come from the developer mindset, and organizations need to learn how to let the developer’s voice be heard above the fearful, warning voices of Operations.

      Custer’s last stand… again

      So please, Mr. or Ms. CIO, no more silly security training that teaches me how to [try to] keep secure an operating system I don’t use, one that I don’t want to use, and one that I wish to hell my organization wouldn’t use. Please don’t waste any more precious IT resources spreading myths about computer security to my fellow staffers, all the while ignoring every piece of advice you receive on how to make fundamental improvements to our network and desktop security, just because the advice contradicts what you “already know.”

      It really is true that switching from Windows to a Unix-based OS will make our computers and network more secure. I recommend switching to Mac OS X only because it’s got the best-designed, most usable interface to the complex and powerful computing platform that lies beneath its attractive surface. Hopefully, Linux variants like Ubuntu will continue to thrive and give Apple a run for its money. The world would be a much safer place if the cowards (er, leaders) who make decisions about our computing desktops would wake up, get their heads out of the sand, smell the roses, and see Microsoft Windows for what it is: the worst thing to happen to computing since… well, since ever!

      Before my recommendation is distorted beyond recognition, let me make clear that I don’t advocate ripping out all the Windows desktops in your company and replacing them with Macs. Although that’s an end-point that seems like a worthy goal here and now, it would be too disruptive to force users to switch, and you’d just end up with the kind of resentment that the Macintosh purges left behind as the 1990’s ended. Instead, I’ve always recommended a sane, transitional approach, such as this one from my November 2002 paper on the subject (note that names have been changed to protect the guilty):

      Allow employees to choose a Macintosh for desktop computing at NNN. This option is particularly important for employees who come to NNN from an environment where Macintoshes are currently supported, as they typically are in academia. In an ideal environment, DITS would offer Macintoshes (I would recommend the flat-panel iMacs) as one of the options for desktop support at NNN. These users can perform all necessary functions for working at NNN without a Windows PC.

      This approach simply opens the door to allow employees who want to use Macs to do so without feeling like pariahs or second-class citizens.

      As long ago as 2002, Mac OS X was able to navigate a Windows network with ease, and assuming your company already has a Citrix server in place, Mac users can access your legacy Windows client-server apps just as well as Windows clients can. This strategy will gradually lower security costs—and probably support costs as well—as the ratio of Windows PCs to Macs in your organization goes down, while lowering the risk of successful malware attacks. As a side benefit, I would expect this strategy to improve user satisfaction as well. Since the cost of Apple desktops today is roughly the same as big-brand PCs like Dell, the ongoing operational cost of buying new and replacement machines wouldn’t take a hit, contrary to what the IT mythmakers would have you believe. In fact, did you know that all new Apple computers come with built-in support for grid computing? Certainly! Flick a switch, and your organization can tap into all the Mac desktops you own to supplement the company’s gross computing power. What’s not to like? (My 2002 report didn’t cover grid computing — it was a new feature in Mac OS X 10.4 last year — but it did address all the issues, pros, and cons an organization would face in integrating Macs with PCs; however, it’s too large a subject to discuss further here.)

      But how do you convince IT managers of this, when operating systems from Microsoft are the only kind they’ve ever known? I certainly had no luck with mine. Heck, I didn’t even gain an audience to discuss it, and my fellow mid-level IT managers were aghast that I had even broached the subject. After all, many of them were still smarting from the bruising—but successful—war against Mac users they had waged during 1994-96. The fact that in the meantime Apple had completely rewritten its operating system, abandoning the largely proprietary one it built for the original Macintosh and building a new, much more powerful one on top of the secure and open foundation of Unix made no difference to these folks whatsoever. It’s not that they disagreed with any of the points I was trying to make… they didn’t even want to hear the points in the first place!

      A new approach for IT managers

      For the most part, the managers who, like “hear no evil” chimps, muffled their ears back in 2002 were in charge of IT operations. To them, change itself is evil, and the thought of revisiting a 5-year-old decision for any reason is simply unthinkable. And yet… consider how much the computer landscape changes in a single year nowadays, let alone in 5 years. Individuals with good technical skills for operations management but no tolerance for change should simply not be allowed to participate in decisions that require objective analysis of the alternatives to current practice. And at the pace of change in today’s technology market, inquiry into alternatives needs to become an embedded component of IT management.

      For what it’s worth, here are a few principles from the Martian Code of Conduct for IT management:

      1. Make decisions, and make them quickly.
      2. Decisions should always consider your escape route in case you make a bad choice.
      3. Escape routes should enable quick recovery with as little disruption to users as possible.
      4. Open source options should always be considered along with commercial ones.
      5. COTS doesn’t stand for “Choose Only The Software” Microsoft makes.
      6. Sometimes it’s better to build than to buy. Sometimes it’s better to buy than to build. A wise IT manager knows the difference.
      7. Reevaluate your decisions every year, to determine if improvements can be made.
      8. Don’t cling to past decisions just because they were yours.
      9. Never lock yourself in to one vendor’s solution. Always have an escape route. (Wait… I said that already, didn’t I?)
      10. Know thy enemy. Or at least know thy vendor’s enemy.
      11. Be prepared to throw out facts you’ve learned if new information proves them wrong.
      12. IT is a service function, not a police function. Remember that the purpose of the IT group is to skillfully deploy the power of information technology to improve productivity, communications, and information management at your organization.
      13. Never let contractors make strategic IT decisions for your company.
      14. Never take the recommendation of a contractor who stands to gain if you do. (In other fields, this is called “conflict of interest.” In some IT shops I know, it’s called “standard practice.”)
      15. Don’t be afraid to consider new products and services. When you reject a technology or tool a customer inquires about, be sure you understand why, and be prepared to explain the pros and cons of that particular technology or tool in language the customer will understand.
      16. Make sure your IT organization has components to manage the following two primary activities on an ongoing basis, each of which has its requirements at the table when you compile budget requests for a given year:
        • Application developers capable of handling a multitude of RAD tasks. This group should maintain an up-to-date laboratory where new technology and tools can be evaluated quickly.
        • Operations group with subcomponents for dealing with networking, telecommunications, desktop management, security, data, and application/server maintenance.
      17. Always obtain independent estimates of whatever resource requirements the operations group tells you are needed to make significant changes in technology platforms at your organization, because an operations manager will always exaggerate the true costs.
      18. The success of your organization is measured not by the size of the desktop support group’s Help Desk, but rather by continued progress in reducing the number of requests and complaints that are referred to the Help Desk. A rise in Help Desk requests over time is a symptom that something is probably wrong—not a signal to ask for a larger Help Desk budget.
      19. Similarly, the percentage of a company’s budget that gets devoted to IT should become smaller over time if the IT group is successfully discharging its mission. Calls for larger IT budgets should be viewed skeptically by the COO, since they often signal an IT group that is unable or unwilling to find better alternatives to current practice.

      From the perspective of an IT manager who has never worked with anything but Windows desktops, the prospect of having to welcome Macintosh or Linux systems into your Windows-only network must be a frightening one indeed. If you know absolutely nothing about Mac OS X and your only experience with a Mac was a brief hour or two with OS 7 a decade ago, your brain will very likely shut down at such a thought, and your hands will plant themselves on your ears if a colleague begins speaking in that direction. This is entirely understandable, and it’s equally understandable that the vast majority of your existing Windows users will want to remain on the only computing platform they’ve ever known.

      But don’t you see? This fear doesn’t mean a decision to support Mac OS X in your organization is wrong! Such fears should certainly be considered in a transition plan, but they shouldn’t be considered a reason to oppose development of a transition plan. Fears like these, and the sometimes irrational attitudes they bring to bear in technology decisionmaking, are why we desperately need new blood in the Nation’s IT departments, and why applicants whose only (or only recent) training has been in MCSE shops should be filtered out from the get-go. You often hear Macintosh users “accused” of being cultish, but from my perspective, steadfast Microsoft Windows partisans are much more likely than the Mac users I’ve known to meet the following definition of “cultish”:

      A misplaced or excessive admiration for a particular person or thing.

      By fostering the myths about malware threats, the cult of Microsoft has already poisoned the computing experience for millions of people and wasted billions of dollars trying to shore up the bad past decisions of its Microsoft-trained hordes.

      It’s time to give some new ideas a shot. It’s time to begin a migration off of the Microsoft Windows platform in U.S. corporate and government offices. Only once we dismantle the Microsoft computing monoculture will we begin to beat back the malware plague. Until then, IT security will simply spin its wheels, implement security policies that punish the whole software development life cycle because of Microsoft’s sins, and require Mac OS X users to take online security training that simply teaches all the things we have to fear from using Windows computers.

      Addendum: A few articles for further reading:
    • Macs And Viruses. Fact vs. FUD, Mac360, May 2006
    • Melissa and Monoculture, Gerry McGovern, April 1999
    • Cyberinsecurity, CCIA, September 2003
    • Beware the Microsoft Monoculture, CNET, May 2006
    • Fears Over New Mac OS X Trojan Unfounded, Ars Technica, February 2006
    • Network Managers Flee IE, trimMail, January 2006
    • A Crawler Based Study of Spyware on the Web, University of Washington, February 2006
    • Mad As Hell, Switching to Mac, Winn Schwartau in NetworkWorld, May 2005

      Colophon

      This article is the first time I’ve used a new, very useful JavaScript called Image Caption from the Arc90 lab site. Image Caption makes it easy to include text captions with the graphics you publish to illustrate your text. It includes a small JavaScript file and some sample CSS code. To implement, you simply add a class attribute to the images you want to caption, add the caption text as a “title” attribute, and include the script in the head of your HTML code.

      I also had fun using the terrific JavaScript called simply Reflection.js. It’s recently shed about 30kb of file size and is down to only about 5kb, works great alongside Prototype/Script.aculo.us, and is childishly simple to execute. Besides adding a link to the JavaScript file, you add a class attribute to the images you want to reflect. For each reflection, you can tweak the reflection height and its opacity by adding specific measures in two additional class attributes. Unlike other reflection scripts I’ve tried, this one automatically reflows the text once the reflected image is added to the layout.

      April 12th, 2006

      Web-Based Collaborative Editing: Twiki, Tiddly, or TikiWiki?

      I spent a few weeks in December 2005 investigating the universe of wiki software, and confirmed what I already suspected: It’s a very big universe with many wikis! It would be impossible to explore them all, so I first tried to come up with a short list of wiki engines to focus on. Fortunately, there are a number of excellent sites that attempt to provide matrices of wiki software functions and abilities. Here are a few I used and recommend:

      After studying these various resources, I was able to narrow the list of wikis down to the following:

      MediaWiki was the default choice, since I assumed it was probably the best of the lot, given its starring role in powering Wikipedia and just about every other high-profile wiki you encounter on the web. After a painless default installation of MediaWiki, I had the usual MediaWiki shell and did a few quick walk-throughs of the structure just to make sure all the plumbing was in place. It seemed to be, so I proceeded to install a few of the others from my short list.

      In fairly quick succession, I installed Dokuwiki, PMwiki, and Tikiwiki, reviewed their documentation and capabilities, and did some basic configurations. They all seemed to be reasonably good, but none was noticeably superior, at first glance, to my initial configuration of MediaWiki. It seemed to make sense to stick with MediaWiki, given its large market share and equally large mind-share.

      So, over a period of about 2 days, I began trying to configure MediaWiki to do some things beyond its default behavior–things I knew would be needed to provide a useful wiki for my target, non-technical clientele.

      What a mess! I had spent 2 solid days without accomplishing much of anything toward setting up the desired wiki, which by the way was intended for use by a Federal organization that was interested in testing the use of wikis for developing and maintaining standard operating procedures for its divisions and branches.

      Here is a summary of the problems I encountered with MediaWiki:

      1. Basic help on structured wiki markup was not available from within the software. In fact, no help files were loaded by default. Users are expected to create their own help pages.
      2. The software’s documentation is terrible. The main problem is that there are so many sources of information, you get conflicting instructions. Many of the conflicts have to do with the various versions of MediaWiki (1.3, 1.4, 1.5, etc.).
      3. Creating simple navigation is quite difficult. One approach to navigation is to use “sub-pages,” but then forming links is tricky, and the page names include their parents by default. In other words, the relationships are discovered strictly by naming. Using piping, it’s possible to make the link text look OK, but the titles on the pages are another issue.
      4. MediaWiki includes no basic, web-based administration tools at all. In fact, there’s no detection of sysadmin capability in the interface. To change the links in the Navigation box, for example, it turns out (after hours of hunting) that you are supposed to change the text in a page called Special:Allmessages. Not exactly intuitive, and it’s set up by default so as to be editable by anyone.
      5. Another useful navigation feature–breadcrumbs–doesn’t exist and can’t be created without custom coding. (There’s an extension for this, but it only works in an older version of MediaWiki.)
      6. Skinning is also very difficult compared with the other wiki software I had looked at.
      7. A basic requirement for this project that I understood was not natively wiki-like was the need for some basic authentication and the ability to write-protect certain parts of the wiki tree for different groups. MediaWiki has a plugin for authentication, but it turns out that anyone who has administrator privileges can edit any part of the tree, and that wasn’t going to be sufficient in my security-conscious Federal agency.

      After this experience, I decided to return to the drawing board and take a second look at the short-list packages. I also added a new one: Twiki. It’s written in Perl and uses flat files, but appears to be much more “mature” than some of the others.

      In general, my impression after working with these various software packages is that wiki software is not nearly as “mature” as blog software. I was looking for an open-source wiki that would be as powerful as WordPress is in the blog world, while also being as easy to design, configure and administer as WordPress.

      Twiki wasn’t much better, and neither was MoinMoin, which I also ended up checking out (even though MoinMoin is written in Python, and I had no Python programmers to call on). Despite much positive press, MoinMoin has the same deficiencies as other wiki software. And what are those?

      Basically, wikis were developed for use by programmers as a way of sharing information on software projects. They developed around a culture of highly sophisticated hacker-types who didn’t need a lot of hand-holding when it came to navigation. The main concern was to allow rapid development of pages on a new topic, with automatic links to pages that hadn’t yet been written (but which needed to be written). Wikis were designed to grow organically, as one writer filled in the blanks in another’s page by adding information to it through hyperlinks, or as multiple writers contributed to fleshing out the details on a particular topic. In both cases, the result was to produce a decentralized information resource that relied primarily on search for finding things.

      On Wikipedia today, it’s become clear to those “in charge” that strong editorial oversight is needed to keep a wiki useful. For one thing, wikis don’t automatically understand synonymous terms. One person may write a page that has a link to a new page called “WikiSystems”, and another may already have filled in a page called “WikiSoftware.” Unless someone were watching “from above,” you could end up with two pages that covered pretty much the same ground.

      Also, notice the terms “WikiSystems” and “WikiSoftware.” In wikis, the default way of linking is to write new pages in what is known as “camel case”: two words “munged” together, each having an initial cap. Wiki software is designed to recognize camel-cased terms and to automatically hyperlink them. Again, this is useful in its original conception, but it’s not particularly intuitive for a nontechnical user base such as you would find in most business or government organizations.
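The camel-case convention is simple enough to sketch. Here is a minimal, hypothetical version of the auto-linking pass; real wiki engines use more elaborate rules, and the URL scheme here is illustrative only:

```python
import re

# A "WikiWord": an initial capitalized run followed by at least
# one more capitalized run, e.g. WikiSystems or WikiSoftware.
WIKI_WORD = re.compile(r"\b([A-Z][a-z]+(?:[A-Z][a-z]+)+)\b")

def autolink(text: str) -> str:
    """Replace camel-cased terms with wiki-style hyperlinks."""
    return WIKI_WORD.sub(r'<a href="/wiki/\1">\1</a>', text)

print(autolink("See WikiSystems for details."))
# -> 'See <a href="/wiki/WikiSystems">WikiSystems</a> for details.'
```

Note that ordinary capitalized words like “See” are left alone, which is exactly why the convention appealed to programmers: no explicit link markup is needed.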

      Another shortcoming that many wikis don’t handle well is authentication. Most wikis are designed to allow content editing by anyone. Most also allow administrators to restrict editing to registered users only. However, the ability to restrict access to certain pages to only certain people is not a native ability in most wiki systems.
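To make the gap concrete, here is a minimal, hypothetical sketch of the page-level access check that most wiki engines lacked at the time. The page and group names are invented for illustration; no real wiki package’s API is being shown:

```python
# Hypothetical per-page access rules: only members of the listed
# groups may edit; pages with no rule fall back to the classic
# wiki default, where anyone may edit.
ACL = {
    "SOP/WebDesign": {"edit": {"webteam"}},
}

def may_edit(page: str, user_groups: set) -> bool:
    rule = ACL.get(page)
    if rule is None:
        return True  # classic wiki behavior: open editing
    return bool(user_groups & rule["edit"])

print(may_edit("SOP/WebDesign", {"webteam"}))  # True
print(may_edit("SOP/WebDesign", {"finance"}))  # False
print(may_edit("AnyOtherPage", set()))         # True
```

The point is that the check is trivial to implement, yet as of this writing it simply wasn’t a native ability in most wiki systems.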

      Before I get around to describing the software I ultimately selected, I want to include my impressions of a few commercial software packages that have developed in the last year in an attempt to feed the growing market for wikis in corporate Intranets. One of the most well-known is Jotspot, an outsourced wiki system that can be purchased for a monthly fee. Jotspot is probably the most advanced wiki of this type, although since December there have been a fairly large number of newer entrants to the field, and it’s possible that Jotspot has some good competitors by now. Jotspot is actually more of a full-blown Intranet than a wiki. Indeed, it shares this characteristic with Twiki, which branches out way beyond the central wiki functionality. Besides being a wiki, Jotspot (and Twiki) comes with a large number of plug-in applications that can be used for various Intranet functions (e.g., Project Management, Bug Reporting, Company Directory, Knowledge Base, Call Log Management, Blogging, Group Calendaring, Meeting Management, Polls and surveys, Personal to-do lists, etc.) The hosted version has a reasonable price tag, maxing out at $199 a month for unlimited users.

      JotSpot also has an enterprise version for companies that want to host the software themselves. I set up a test wiki at JotSpot, and although it definitely has a lot to offer, it isn’t nearly as configurable as the open-source packages. In addition, I felt certain I could find a perfectly good wiki package for my target organization without investing a lot of money.

      Another impressive, hosted wiki-like system is Backpack, and I also set up a test there. However, Backpack is designed to work best as a personal wiki, rather than for collaboration. The same company also makes a web application called Basecamp that looks like an ideal solution for project management uses, but is not designed for documentation or knowledge management–the two main uses that this pilot wiki would be put to.

      And if anyone is interested in a personal wiki, I don’t think you could do much better than TiddlyWiki, an amazing rich-web-interface “wiki on a stick” that literally packs all of its information into a single portable file. It performs all kinds of magic that could conceivably be useful collaboratively, but it is designed to work best for individuals.

      Finally, I looked at ProjectForum, a commercial package the customer was interested in. It turns out that ProjectForum is not actually a wiki system. Rather, it’s a discussion-forum package (there are hundreds–possibly thousands–of such packages) that is trying to leverage the buzz around the term “wiki” and around RSS.

      The critical difference is that a wiki is primarily a content management system, not a system for user discussions. MediaWiki uses the term “collaborative editing” because wikis typically have a built-in discussion forum attached to each piece of content added to the wiki. For example, if I post a Standard Operating Procedure on designing a website, readers have the ability to open a discussion about that SOP. Also very important is the ability for users to interlink content into a growing content tree, producing in the end a very useful knowledge base on a given topic.

      ProjectForum doesn’t have those features, and it is missing other standard wiki features as well. As its name implies, ProjectForum is actually designed for project management rather than content/document management, and it excels at the collaborative-discussion part of project management. In that sense, it is similar to Basecamp.

      So after this market review, I had almost concluded that no wiki was really yet up to the challenge I was hoping to put it to, when I decided to try a relatively new, little-known package called Wiclear. After reading through the website documentation, I tried to quell my growing excitement, because on paper at least, Wiclear was designed to overcome all of the shortcomings that were so obvious in the wikis I’d tried.

      Developed by a French programmer and modeled after a French blogging system called Dotclear, Wiclear shares with nearly all other wikis the virtue of being open source, meaning I can freely download the source code and install it. Wiclear is written in PHP, an increasingly popular web programming language, on top of the open-source MySQL database. Since I happen to have some expertise in both, I felt comfortable with the prospect of possibly having to tweak the system to my requirements.

      Indeed, after only 3 hours of work, I was able to configure Wiclear with all the basic requirements:

      • Apply a customized style sheet
      • Customize the section navigation
      • Customize the page elements
      • Customize the heading
      • Set up test users
      • Enter test content
      • Set up appropriate help documentation for wiki newbies

      Compared with my experience with the other wiki software–in particular, MediaWiki–Wiclear was very easy to work with. Furthermore, Wiclear had the following required features, some–but not all–of which were available in one or more of the other wiki systems.

      • Browser-driven installation
      • Web administration interface
      • Easy templating
      • Hierarchical page structure enforcing parent-child relationships between pages
      • Individual page access controls through use of industry-standard ACLs (access control lists); the system provides an easy web-based interface for setting per-page permissions
      • An automatically generated “site plan”–site map–for navigation
      • Automatically generated “breadcrumbs”
      • Automatically generated “sub-page navigation” (showing all child pages to the current one)
      • Registered users can add comments about any page, whether they are the author or not. (This feature is configurable and is in fact a standard feature of most wiki systems.)
      • Users can attach external files to individual pages (a relatively rare wiki feature, but one that I was sure would be “oohed and aahed” at by my customer base)
      • Enables user self-registration, and provides flexible user/group management tools
      • Provides a “Post New Content” feature that’s unique among wikis, and extremely useful for adding new content to the tree
      • The usual features that made wikis so popular for collaborative editing in the first place:
        • Page history
        • Comparisons with, and rollback to, earlier versions of a page
        • Subscriptions by email
        • RSS feeds
        • List of recently changed pages
        • Search
        • “What links here” feature
        • Simple editing system for easy content entry (with optional HTML entry), as well as an optional preview capability
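      Several of the navigation features above–the site map, the breadcrumbs, and the sub-page lists–all fall out of one design decision: every page records its parent, forming a tree. Here is a minimal sketch of how those features derive from that single parent pointer; the page names and data layout are hypothetical, not Wiclear’s actual schema:

```python
# Hypothetical parent-child page table: page name -> parent page (None = root).
parents = {
    "Home": None,
    "Procedures": "Home",
    "WebsiteSOP": "Procedures",
}

def breadcrumbs(page):
    """Walk up the parent chain to build a Home > ... > page trail."""
    trail = []
    while page is not None:
        trail.append(page)
        page = parents[page]
    return " > ".join(reversed(trail))

def children(page):
    """Sub-page navigation: all pages whose parent is this page."""
    return sorted(p for p, parent in parents.items() if parent == page)
```

A site map is then just the same tree printed in full, starting from the root page.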

      Further, if my customers were ever to require the ability to support multiple languages, they could turn on one of Wiclear’s most impressive features: built-in multilingual support.

      Wiclear has a clear, well-documented code base, and with my knowledge of PHP and MySQL–plus HTML, CSS, and JavaScript–I was quickly able to add a few custom features that I thought my customers would appreciate. The first was a simple WYSIWYG HTML editor that would give our writers the comfort of having Word-like editing tools in place. For this, I chose Dojo’s excellent DHTML rich-text editor, which is one of the few that support Safari on the Mac as well as the other usual suspects (Mozilla/Firefox and IE). The Dojo editor is a breeze to set up, and it works beautifully. It doesn’t “do tables,” but my pitch to users is to keep the text structure simple, so hopefully nothing more complicated than headings and nested lists will be needed.

      The second tweak that might be of interest to readers was a default setting that automatically subscribes an author to the page he or she has written. This ensures that anyone who authors a page gets notified whenever it is changed. (Authors can’t opt out of the initial subscription, but they can always unsubscribe afterward.) I hope this will take care of the worry over unauthorized edits, since it will be hard not to know when “your” page has changed, and quite easy to go in and fix any errors.
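      The logic of the tweak is simple enough to sketch. This is an illustrative outline, not Wiclear’s actual PHP code–the function names and in-memory structures are invented:

```python
# page -> set of users subscribed to change notifications for that page.
subscriptions = {}

def save_page(page, author):
    """Saving a page also auto-subscribes its author to future changes."""
    # ... writing the page content to the database is omitted here ...
    subscriptions.setdefault(page, set()).add(author)

def notify_subscribers(page, editor):
    """Everyone subscribed to the page, except the person who just edited it."""
    return sorted(subscriptions.get(page, set()) - {editor})
```

So if one author writes a page and a second person later edits it, the original author ends up on the notification list and hears about the change.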

      The author of Wiclear has steadily continued to improve the product. There have been 3 new releases since I installed Wiclear in late November 2005. In fact, the author has incorporated at least one of the features I requested after my initial configuration–namely, the ability to define a “root” page that can be ACL-protected against accidental damage. This was kind of important for giving my customers the comfort of knowing that their part of the tree wouldn’t be uprooted someday, whether deliberately or inadvertently. :-) I actually hand-coded the hack into Wiclear at the time, but the software’s author had finished integrating that function by January.

      So far, I’m very pleased with my choice, and still relieved that I didn’t have to back out of the idea of testing the wiki waters for collaborative editing. Next comes the more difficult part–convincing users that this is a tool that can work for them rather than simply another complication to their working lives. Fortunately, there are several forward-thinking groups in the agency that are anxious to try the wiki out. I was delighted to set up the first group with their own branch of the wiki tree, and look forward to getting their feedback.

      In a dumbed-down form appropriate for non-geeks, wikis have great potential to be a key knowledge-management solution for a lot of content management problems in an organization. I think with Wiclear I’ve set up a foundation that won’t scare people away before they’ve even given it a try–and that, in my organization, would be called a victory!
