Deeply Intertwingled

Here's another paper I wrote for my Multimedia Survey class, this one on realizing more effective content searching on the web. Read ahead and enjoy; I've added links where they didn't exist in the paper version so you can see some of the stuff I'm talking about.

The world wide web has begun the process of integrating boundless information into our lives. The internet was created based on paradigms set forth by pioneers in the interactive field, including Vannevar Bush, Douglas Engelbart, and Ted Nelson. As the internet has grown it has permeated our cultural consciousness and shifted the way we do business, connect with family, friends, and strangers, and more. As the information superhighway has grown it has also become unwieldy, and it is now necessary to use specialized tools and services to take advantage of the opportunities it affords. Search engines scan and map the web, while countless individuals using communication tools push its borders far beyond what these digital cartographers have charted. Although it is often called the most democratic medium the world has ever seen, the internet still requires uncommon skill to create, and then to find, the content available. Invisible metadata residing in these digital files can act as a spotlight guiding users to their destinations.

Vannevar Bush’s “Memex” was the earliest consequential proposal for storing and accessing data in a database-like system. It was clear to him that, with the vast amount of scientific research regularly conducted, important discoveries and insights would be lost in the crowd without a systematic method of organization. It would require tremendous community participation to guarantee that all the pertinent data would truly be included. The form this would take was laid out by Ted Nelson in his 1974 book “Computer Lib/Dream Machines”. Data would not be limited by the analog tools of Bush’s day, but would be set free by digital storage and networked access. Nelson’s dream of Xanadu was based in hypertext: text that would be read in a branching fashion instead of sequentially. In his vision of networked data the user could access one article and leap from there to any number of related subjects (Packer 164). Instead of having to read a book through the filter of another’s analysis, the raw source material would be presented in hyperlinked form, allowing users to come to their own conclusions and, through these connections, weed out errors. Although Nelson’s Xanadu would be eclipsed by the world wide web, much of what has come since has been a partial fulfillment of his desire: “everything is deeply intertwingled” (164).

Hypertext, or linked content, has had tremendous success on the internet. Countless web domains, public and private, have systems of connection, search, and organization that send users to whatever they wish to see, within and sometimes beyond their borders. Wikipedia (Wikipedia) allows users to search and edit articles in a massive online encyclopedia. Links within each article send the reader to hundreds of other articles, and resource links lead to source material on the rest of the web. The resources now available to the typical web surfer are unparalleled in the planet’s history, and yet they can still improve. Although most websites link well within themselves, and many link well to outside sources, the portal into the digital playground for most users is a search engine. Enter Google.

The Google powerhouse rose to its prominent position because it allows users to search and sift through the millions of options available in order to find just what they’re looking for. Websites whose creators follow good professional practice and structure their markup so that crawlers can read it are indexed by Google’s massive server farms and served to users whenever a search matches keywords in the site’s HTML. Google intends to make the internet the center of all computer use through its development of productivity apps, browsers, and a recently announced operating system. Many of these projects are aimed squarely at its heavyweight competitors: Microsoft, Yahoo, and Facebook. This competitive market has led to incredible innovation while at the same time myopically focusing these companies on defeating one another. Despite the fierce rivalries in the technology industry, the web search paradigm has not changed in a decade or more.

Although the number of interlinked websites has grown exponentially, the way we access that information has stagnated (despite Microsoft’s recent overhyped Bing launch). The process of classifying, sorting, and searching data is cumbersome and confusing for creators of web content, for programmers, and for web users. Websites are built by writing content as plain text, marking it up with HTML (and assorted other languages) to classify what the different parts of the content are, and styling it with CSS for display in the user’s browser. If the markup is not written correctly and meta tags are not placed in the document head, web-crawling search engines will not find the content. This is becoming more difficult as websites grow more dynamic, with new content appearing constantly. Browser incompatibilities gum up the web development process even further, thanks to Microsoft’s refusal to implement widely accepted web standards in its browsers, and competing file formats for images, video, and more only add to the mess. In short, the process of adding content to the linked mass of online information is frustrating and requires special skills that the average user does not have. A simple and effective method of embedding searchable data within content destined for the web would help cut through some of the technical difficulties that prevent the full realization of Bush’s and Nelson’s vision.
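To make the crawling problem concrete, here is a minimal sketch in Python of the kind of head-scanning an indexer performs. It is an illustration only, not anything Google actually runs: the HeadIndexer class and the sample page are invented for this example, and it uses nothing beyond the standard library's html.parser.

    from html.parser import HTMLParser

    # Pulls the title and any keywords/description meta tags out of a page's
    # head. If those tags are missing or malformed, this pass comes up empty
    # and the page contributes nothing useful to the index.
    class HeadIndexer(HTMLParser):
        def __init__(self):
            super().__init__()
            self.in_title = False
            self.title = ""
            self.meta = {}

        def handle_starttag(self, tag, attrs):
            if tag == "title":
                self.in_title = True
            elif tag == "meta":
                attrs = dict(attrs)
                name = (attrs.get("name") or "").lower()
                if name in ("keywords", "description"):
                    self.meta[name] = attrs.get("content", "")

        def handle_endtag(self, tag):
            if tag == "title":
                self.in_title = False

        def handle_data(self, data):
            if self.in_title:
                self.title += data

    page = """<html><head>
    <title>Deeply Intertwingled</title>
    <meta name="keywords" content="hypertext, metadata, search">
    <meta name="description" content="A paper on searchable web content.">
    </head><body>...</body></html>"""

    indexer = HeadIndexer()
    indexer.feed(page)
    print(indexer.title)  # Deeply Intertwingled
    print(indexer.meta)   # {'keywords': '...', 'description': '...'}

Strip the meta tags out of that sample page and the keyword lookup has nothing to match against, which is exactly the failure described above.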

Metadata is invisible data embedded in digital files. Its use became prominent when digital cameras became cheap enough that vast numbers of photos began to fill up millions of users’ hard drives, and a simple method of organizing those photos became necessary. Using metadata, a photographer can attach keywords to every photo, individually or in groups, so the photos can later be found by keyword search.

This form of invisible information holds promise for websites as well. Imagine a word processing program or a website input box (a WordPress post-composing window, for example) where the content creator attaches keywords to sections of text through a simple window or key command. This invisible information goes wherever the text goes. Instead of building websites from plain text, the web programmer can insert this smart text, and automatically an invisible layer of data is embedded within every element of the website; a rough sketch of the idea follows below. This metadata would be searchable and sortable by search engines. It differs from tagging blog posts with subjects because it would be usable across the rest of the web, not simply within the website itself. Smart text tagged with metadata would smooth the process of web creation because it would enlist everyone who works on a website in classifying the data they create. Through the widespread adoption of metadata, power would be put back in the hands of users to find connections in and out of the subjects that interest them. It would help users navigate the web nonlinearly, as Nelson imagined, with the connections tied together “every whichway” (Packer 165).
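There is no existing standard for smart text, so the sketch below is purely hypothetical: a small Python model in which each span of content carries its own keywords, which travel with the text and can be searched directly. The SmartSpan structure and the search helper are inventions for illustration.

    from dataclasses import dataclass, field

    @dataclass
    class SmartSpan:
        """A chunk of text with invisible keyword metadata attached."""
        text: str
        keywords: set = field(default_factory=set)

    def search(spans, keyword):
        """Return every span tagged with the given keyword."""
        keyword = keyword.lower()
        return [s for s in spans
                if keyword in {k.lower() for k in s.keywords}]

    post = [
        SmartSpan("Vannevar Bush proposed the Memex in 1945.",
                  {"Memex", "Vannevar Bush", "hypertext"}),
        SmartSpan("Nelson's Xanadu imagined branching, nonsequential text.",
                  {"Xanadu", "Ted Nelson", "hypertext"}),
    ]

    for span in search(post, "hypertext"):
        print(span.text)

Because the keywords attach to individual spans rather than to whole pages, a search engine built on this idea could link into and out of single passages, tying content together “every whichway” rather than page by page.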
