
V. Strategy and Process

In this section, I first consider who should collaborate in this evolutionary process. Then I consider how (a) human practices and (b) technologies may migrate.

V.A. Who collaborates?

First, of course, the scholars must be engaged. Faculty from diverse disciplines must be willing to participate in an evolution of their academic habits. On the other hand, in order to fit into the life of a researcher, the typical collaboration must demand very little time. (One hour/week for 9 months is a lot to commit, for example.) In addition, every project needs at least some fraction of:

* an RA familiar with the subject, or a TA if there is an associated course
* a designer -- trained to use shrink-wrap applications, and to liaise with developers
* a system administrator[37]
* a programmer to work on custom interfaces and some custom functions.

Except for the RA, these fractions may be aggregated across several concurrent projects into whole positions for efficiency.

Why should the university maintain a pool of mediator-designer-authors? For the foreseeable future, there will be a hinterland between where technology ends and where human needs[38] begin. As user-knowledge and technology grow, this hinterland between the known and unknown, between the assimilated and the unnatural, will change. If the goal is to make technology as transparent or weightless as possible, refashioning it to human needs will require expert mediator-designer-authors. Until artificial, intelligent systems reliably and benevolently refashion themselves to meet our desires, such experts will be human. Particular communities do best with sympathetic technical experts who understand the communities from the inside. It may be wise to put several feelers out into the future to inform the evolution of current scholarly practice.[39][40] (See V.B.)

Some have remarked that, historically, certain faculty have enjoyed much greater access to academic software expertise than others. This is an unavoidable consequence of the minuscule size of the resource compared to the size of the professoriate. These academically-oriented software design stances and expertises are either non-existent in the industry or largely beyond the pocketbook of most academic authors.

Some have likened the new media composition teams (researchers, programmers, artists, writers, and editor-managers) to the associations of 16th century printers and scholars which evolved into modern publishing houses. Our situation is somewhat different, however, because university resources are not as elastic as the proto-capitalist economy of Europe was in the 16th century. In any case, it is true that the notion of authorship is being stretched by the integration of new media. Indeed, the emerging richness of protean, interactive, scripted interfaces makes them more and more a form of literature in their own right, so that the traditional notion of an author who "simply" pours his/her words into a container like a book or a word-processor file is dissolving.

It's not clear what new social forms will arise around these new modes of creating interactive media, but some have looked to activities ranging from story-telling to documentary film and theater. We have only begun to mine the crafts of musical composition and performance for guidance with media composition in digital contexts.

V.B. Migrating people and practices

Let us over-simplify by casting people in two roles: "users" (viewers/readers -- people who handle information) and "developers" (authors/designers/programmers -- people who script behavior). We need to migrate practices by people in all these roles.

We should migrate users and their habits implicitly by running progressively more network services on the more powerful systems, underneath familiar interfaces (e.g. Macs). This requires interoperability across operating systems, which the Mac OS does not provide, yet.[41] It is essential not to paint ourselves into a corner. We need to encourage users to:

* use structured data as much as possible (see the sketch after this list),
* avoid monolithic applications which do not communicate with other applications,
* prefer network solutions -- use network mail services, AFS data management,[42]
* share network tools.[43]
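
To make the first two habits concrete, here is a minimal sketch, written in present-day Python purely for illustration; the field names and the shared store are hypothetical stand-ins, not any system we run today:

```python
# A minimal sketch of structured, networked data habits. All field names
# and the "store" below are hypothetical stand-ins.

import json

# Monolithic habit: an opaque blob only one application can interpret.
opaque_note = b"\xd0\xcf\x11\xe0 ...proprietary word-processor bytes..."

# Structured habit: explicit fields that any networked tool can parse.
structured_note = {
    "type": "annotation",
    "work": "Le Morte d'Arthur",
    "locus": "Book XXI, ch. 4",
    "text": "Compare the burial scene with the Alliterative Morte.",
}

def publish(record, store):
    """Write a record to a shared store (think: a directory on AFS),
    not to a private local file."""
    store.append(json.dumps(record))

shared_store = []                 # stand-in for a networked store
publish(structured_note, shared_store)

# Any other tool on the network can recover the content, not just the bytes:
recovered = json.loads(shared_store[0])
print(recovered["work"], "--", recovered["locus"])
```

The point is not the particular notation but the habit: content kept as explicit structure in a shared place survives any single application.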

The Design/Learning spiral mentioned in III.E continues beyond the stage where users tell designers what they want. In the most fruitful collaborations, a stimulating two-way conversation can occur between faculty and technologists. Faculty enter with desires which are best expressed in native terms, unfettered by any references to specific technology. Technologists modify software systems to realize some of the desires. Wise technologists can point out the best means -- writing innovative software, using shrink-wrap applications in novel ways, changing workflow, doing it the old way. Enlightened technologists will first consider if technology has any role to play at all. Experience shows that almost all systems must be altered to meet the habits of the humans, though in practice, since technology is so rigid and people are so adaptable, hapless humans often contort their habits to work around the constraints of technology.

After some experience, faculty gradually realize other uses of the customized software and even rethink their research practice. Not a few ASD projects which started out as "courseware" became useful in the creation of new professional practice and new knowledge.[44] This experience, coupled with more general ideas about the utility of software, underlies principle III.C.

More generally, in this two-way collaboration, faculty learn the terrain of technology and discover what is easy and what is difficult to realize given the state of the art. Projects such as the Center for Teaching and Learning or the Curriculum Development Laboratory serve to educate authors about the mechanics of using particular shrink-wrapped solutions. What we're addressing here is a more subtle form of knowledge transfer which comes with a more intimate manipulation of the cybernetic substance called software.[45]

How will we migrate developers' practices? As the envelope of technology grows, the nature of innovative technologists' work will change as well. Machine-level programming[46] may gradually be covered by more expressive scripting, perhaps not as quickly as computer scientists would like. The line between "writing" and "programming" may grow ever fainter. For example, mathematical literature is undergoing a sea-change, evolving from printed descriptions about mathematical objects to computable representations of the mathematical objects themselves. Similarly, we see the evolution of ever more expressive artificial languages in particular domains such as animation and theater.[47] In all domains, of course, we intend augmentation, not replacement, of human functions.
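A toy illustration of that sea-change, again in present-day Python: the polynomial below is not a printed description but a computable object that can evaluate and transform itself. The class is a hypothetical sketch, drawn from no particular package:

```python
# From printed *descriptions* of mathematical objects to computable
# *representations* of them. A hypothetical sketch, not a real package.

class Polynomial:
    """A polynomial represented by its coefficients, lowest degree first."""

    def __init__(self, coeffs):
        self.coeffs = list(coeffs)

    def __call__(self, x):
        # Evaluate by Horner's rule.
        value = 0
        for c in reversed(self.coeffs):
            value = value * x + c
        return value

    def derivative(self):
        # The object transforms itself; no human re-derivation is needed.
        return Polynomial([i * c for i, c in enumerate(self.coeffs)][1:])

p = Polynomial([1, 0, -3, 2])        # 2x^3 - 3x^2 + 1
print(p(2), p.derivative()(2))       # prints: 5 12
```

Here the "text" is the mathematics: a reader can not only read the object but run it, differentiate it, and build further objects upon it.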

Composing media for re-use by other developer-authors has been addressed in what were two very different contexts: scholarly writing and computer systems engineering. It's not clear how the fusion of "literary" forms will change compositional practices in these domains, but I expect some cross-overs or borrowings to occur, not all of which will be beneficial.[48] In any case, there is always a tension between "writing" for others to read and "writing" for personal edification.[49]

The principal advantages of the WWW lie not in its explicit application as a hypertext delivery system. In fact, flooding the net with HTML documents is like boosting the gain on a noisy amplifier; we now have an information network made un-navigable by the density of flotsam. And because of a shortage of WYSIWYW (What You See Is What You Want) multimedia editors[50], novice hypermedia authors have been reduced to spending an inordinate amount of their time writing with very blunt instruments. Having humans transcribe URLs and type in user-interface scripts is analogous to having graphics artists paint by typing in RGB values pixel by pixel. In this context, WWW's contingent benefits are that (a) people will be forced to devise alternatives to hypermedia as they hit the limits of the graph metaphor of nodes and links; and (b) people now have a powerful incentive to put a lot of data into digital form. We can only hope that it will be possible, post factum, to decouple structural tags from layout and from performance specifications in a simple way.
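What such a decoupling might look like, in a deliberately naive sketch (the tag set and both style maps are hypothetical): structure lives in the document, while layout and performance are applied only at delivery time.

```python
# Decoupling structural tags from layout: the document carries structure
# only; separate, swappable style maps decide presentation.

document = [
    ("title",   "Strategy and Process"),
    ("section", "Migrating people and practices"),
    ("para",    "First, of course, the scholars must be engaged."),
]

# Two independent performance specifications for the same structure:
screen_style = {"title": "<h1>{}</h1>", "section": "<h2>{}</h2>",
                "para": "<p>{}</p>"}
print_style  = {"title": "\\title{{{}}}", "section": "\\section{{{}}}",
                "para": "{}"}

def render(doc, style):
    """Apply a layout to structural tags only at delivery time."""
    return "\n".join(style[tag].format(text) for tag, text in doc)

print(render(document, screen_style))   # on-screen hypertext
print(render(document, print_style))    # printed page, same source
```

Neither layout is baked into the text itself, so the same intellectual content can outlive any particular delivery system.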

In any case, the complexity, or put more positively, the richness of an author-developer's writing in these more expressive media should grow. These are partial answers to Steve Boxer's question about what need there is for "custom programming."

V.C. Migrating systems

It is illuminating to read Paul David's path dependence analysis of how industries get into sub-optimal ruts even with access to technological innovation.[51] We can now adopt systems which deliver a high base-level capability -- e.g. thoroughly integrated object-oriented multimedia, network, component/layered/multiple-hardware architecture (none of which, by the way, is delivered by Apple or PC operating systems) -- on top of which we can develop even richer environments. For this reason, we need to distinguish carefully between unguided "follow-your-nose" enhancements and a slightly more informed growth.

We run the danger of painting ourselves into an expensive corner by merely taking the locally easy way out. Forever making uncorrelated incremental changes to delivery environments will in the long run incur tremendous costs in maintenance and consulting. To take one case: SULAIR (among other campus organizations) is trying to network Macs together not only to deliver text documents, but also to manage work-groups, deliver multimedia, catalog and manage site-licensed software, etc. These tasks are really beyond what the Mac OS was originally designed to do. Here SULAIR is faced with spending a significant amount of money on consultants and system administration (out of sync with the UNIX/Distributed Computing system) to patch up what the Mac OS simply does not support very well. Distributed applications always launch perceptibly faster on Ethernet/UNIX boxes; this is not a matter of raw CPU speed: the very same chip (RISC or plain 68x) will support multimedia transport better running X, NeXTSTEP or presumably Taligent than running Mac OS or PC Novell. In this case, it's pretty clear that integrating Macs into DSG's administrative system is far more efficient and rational than trying to jury-rig Apple or PC-specific solutions.[52]

I claim that we actually save money, and obtain a far better quality of life for both users and developers if we reserve a portion of the budget for next-tier, next-generation practices, plus a migration path which preserves intellectual content. Note that this is quite independent from upgrading particular pieces of software or hardware. It's the intellectual content, and to an appropriate extent the scholarly practices (See II) that I believe we should preserve. In addition to incrementally supporting current shrink-wrap tools, we should also invest in paths over which users as well as developers may migrate to more integrated scholarly environments. (ArtAccess and MediaWeaver illustrate this dual strategy.) Rather than pouring all money into increasingly obsolete infrastructure like Mac or PC OS, it would be wise to spread the bets a bit by seeding an upgrade path (in core software environments as well as in user habits). It would pay rich dividends to have a few progressive projects to show other ways to march, rather than wave everyone on as they march merrily into the ocean.

For example, a scholar can always look up a reference -- but it should not matter whether (s)he is doing it on a PC or an SGI, on a black or a white box. DSG/Networking have already worked out this sort of location-independent computing for NS/X Windows/UNIX systems, but this is not implemented on Macs.[53]
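A sketch of the underlying idea, with a hypothetical directory and service names (nothing here is DSG's actual mechanism): the scholar's client resolves a logical name through a shared directory, so the same call works from any box.

```python
# Location-independent computing, minimally sketched: clients name a
# *service*, not a machine. The directory and hosts are hypothetical.

DIRECTORY = {
    "catalog":    "folio.example.edu:210",      # bibliographic catalog
    "mediastore": "media.example.edu:8080",     # image/sound repository
}

def resolve(service_name):
    """Map a logical service name to wherever it currently runs."""
    return DIRECTORY[service_name]

def look_up_reference(citation):
    host = resolve("catalog")
    # On any platform, the same call reaches the same service:
    return f"query {host} for {citation!r}"

print(look_up_reference("David 1988, Path Dependence"))
```

When the catalog moves to a new machine, only the directory entry changes; no scholar's habits, and no client software, need change at all.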

The same horsepower chip that runs a PowerMac could do wonders supporting an operating system which already comes with multimedia objects in place (See IV.B) -- obviating the need to cobble together 7-10 pieces of different site-licensed commercial software to deliver a poor approximation of the same multimedia service. Such solutions are costly because:

* You must pay for each item: Adobe Premiere for QuickTime, Canvas for graphics, Illustrator for Postscript, Debabelizer for a few graphics formats.
* Solutions are always partial: each application deals only with its own special types of data.
* Mac/PC applications are tied to particular OS/hardware (unlike NS or, soon, Taligent)

How might we migrate systems (hardware, software, user interface paradigms)? The migration starts with the University's installed mix of systems. For example, the MediaWeaver is built so that faculty and students can start doing some of this richer work now, rather than wait for a complete system to be built years from now. Modularization allows developers to port components while maintaining service. Then we should set up a few richer systems based on next-tier platforms. For example, the MediaWeaver provides a software framework which can evolve toward richer environments. We need a quorum of next-tier systems in a common space -- a lab -- where we can try out group use. This lab, ideally, should be in a public space like Meyer in the thick of users but also close to development groups like ASD. Participatory design principles suggest that we should place some rich next-tier systems on the desktops and in the homes of "early adopter" faculty and their students early on. I will discuss what some of these next-tier systems might be in section VI.
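The modularization point can be sketched as follows; the class and method names are hypothetical and are emphatically not MediaWeaver's actual interface. Clients program against a stable contract, so a component can be ported to a next-tier backend without interrupting service:

```python
# Porting a component while maintaining service: clients depend on a
# stable interface, not on any one backend. All names are hypothetical.

class MediaStore:
    """The stable contract every backend must honor."""
    def fetch(self, key):
        raise NotImplementedError

class LocalFolderStore(MediaStore):
    """Today's installed base: media on a local disk."""
    def __init__(self, folder):
        self.folder = folder
    def fetch(self, key):
        return f"read {self.folder}/{key} from local disk"

class NetworkedStore(MediaStore):
    """A next-tier replacement: media served over the network."""
    def __init__(self, host):
        self.host = host
    def fetch(self, key):
        return f"request {key} from {self.host}"

def show_image(store: MediaStore, key):
    # Client code never changes when the backend is ported.
    print(store.fetch(key))

show_image(LocalFolderStore("/archive"), "fresco-042")          # today
show_image(NetworkedStore("media.example.edu"), "fresco-042")   # after migration
```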

Of course, one of the most fundamental issues is how to migrate data, or more accurately in the context of this note, how to preserve intellectual content. For example, by keeping data already in AFS and Sybase, MediaWeaver makes migrating knowledge structures less painful than with more ad hoc multimedia systems. In general, we should use rich, standard data structures (see V.E above). We also need tools and staff to systematically deal with migration. There's a tradeoff between hiring less skilled assistants to manually parse/recast information, and engaging expert staff to acquire/make information-conversion tools. SULAIR seems like a natural provider of such services. (The Academic Text Service is a special case of this category.) For efficiency, digitization should be centralized somewhere in SULAIR, not relegated to haphazard departmental efforts. On the other hand, this model must accommodate faculty's own production tools and informal collection habits.[54]
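To illustrate the tool-building side of that tradeoff, here is a minimal conversion sketch, with a hypothetical input convention: it recasts an informal slide catalog into a standard structure and leaves whatever it cannot parse for a human assistant.

```python
# An information-conversion tool, minimally sketched: recast an ad hoc,
# informally kept record into a standard structure, preserving the
# intellectual content rather than the file format. The input convention
# and field names are hypothetical.

import re

# A faculty member's informal collection habit: one slide per line.
informal_catalog = """\
slide 12: Giotto, Lamentation, Arena Chapel, c.1305
slide 13: Duccio, Maesta (detail), Siena, 1308-11
"""

PATTERN = re.compile(r"slide (\d+): ([^,]+), (.+)")

def convert(text):
    """Parse informal records into structured ones; flag what we cannot."""
    records, rejects = [], []
    for line in text.strip().splitlines():
        m = PATTERN.match(line)
        if m:
            records.append({"slide": int(m.group(1)),
                            "artist": m.group(2).strip(),
                            "description": m.group(3).strip()})
        else:
            rejects.append(line)   # left for a human assistant to recast
    return records, rejects

records, rejects = convert(informal_catalog)
print(records[0]["artist"])        # -> Giotto
```

Expert staff write and maintain such converters once; assistants handle only the residue the tools reject. That is the economic case for the "tools plus staff" mix.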


[37] It is not clear that monolithic price mechanisms will work to efficiently distribute resources to the departments, but this issue obviously needs to be taken up at a more general level. Cf. the discussion of pricing access to network resources (J. MacKie-Mason) and information products (H. Varian), and critiques of such approaches.

[38] I leave aside human needs which are artifacts of limited technology -- e.g. fonts, formatting.

[39] I make no millennialist claim that technology evolves uni-dimensionally toward some unique Technology. My image is more that of our scholarly community as an amoeba which could benefit from putting out a few pseudopods for information and locomotion rather than jiggle in a Brownian soup of market forces and, worse still, bureaucratic contingency.

[40] Susan Leigh Star presents some enlightening observations about infrastructure, which she argues should be viewed as a relation rather than a fixed layer of technology. (S.L. Star & K. Ruhleder 1995, Steps toward an ecology of infrastructure: Borderlands of design and access for large information spaces, p. 5.)

[41] That's why the MMDD provides its own front end kits rather than binding to a fixed front end application. For example, inserting an X-window into the Mac desktop would introduce so many additional constraints that the design process would be seriously distorted by workarounds. CORBA addresses this need at a formal level. To date, of the CORBA-partners, only NS has implemented a cross-architecture distributed object framework.

[42] This may make frightening demands upon Networking resources. But this seems to be less expensive in the long run than un-correlated management of shared data characteristic of monadic computing.

[43] This is second nature to X Windows and NeXTSTEP users, but foreign to Mac/PC users because personal computer OS's were not designed with such applications in mind.

[44] Examples being the Argus lighting program and the MathObjects project which evolved into mathematics packages supporting research in fields ranging from demography and radiology to geophysics and physics.

[45] I make no distinction between "data" and "code."

[46] I will lump all procedural languages including C, C++ into this category.

[47] There can certainly be intellectual merit in scripting synthetic organisms (believable agents), just as there can be intellectual merit in proving a theorem. But I do not claim that performance arts, being tied fundamentally to a human subject, can or should be replaced by any of the technologies under discussion.

[48] Hypermedia is one example of how humanist authors tried to adapt a techno-scientific metaphor and composition methodology -- the graph -- to their own writing, with mixed results.

[49] In this context, "writing" could include ephemeral activities like sketching on a chalkboard or playing on a MIDI-keyboard.

[50] Examples do exist: Intermedia, eText, some MMDD applications, MediaStation, CraftMan. Despite its scripting language, Director -- like Premiere -- is used essentially for simple playback, and is not integrated into a seamless workspace.

[51] Paul A. David, Path Dependence: Putting the Past into the Future of Economics, Stanford University Dept. of Economics Technical Report 533 (November 1988). See also the Scientific American article, ca. 1992.

[52] This work is being done by parts of the Data Center, but it is not clear to what extent the solutions are being designed with scholarly work in mind.

[53] The most salient reason for this is that the Macintosh and PC operating systems were never designed for networks, and the consequent network managers which were patched together are essentially inadequate for the large-scale, intense traffic which now characterizes academic institutions.

[54] This requirement precludes systems like Xerox PARC System 33, or expensive commercial object oriented multimedia databases like Illustra.



xinwei@leland.stanford.edu - June 1995