Do we need a new book on scholarly editing?
Following up on my first blog entry
about the possibility of producing a 4th revised edition of Scholarly
Editing in the Computer Age, I think the development of electronic
scholarly editions has reached a point that justifies the idea.
SECA was addressed to serious scholars who saw a need to edit
a text but who had no training in scholarly editing. Some of the
best textual critics (and some of the worst) come from the ranks of
those untrained in bibliography and textual criticism but whose
scholarly research left them unhappy with the existing editions of
the works they have investigated.
Following the publication of the
first three editions of SECA, I wrote (in part by gathering up
scattered essays, revising them and adding new chapters) Resisting
Texts: Authority and Submission in Constructions of Meaning (1997),
this time trying my best to answer my own remaining questions about
the nature of literary texts and the consequences of a range of
editorial strategies that could be adopted for scholarly editing.
The best compliments I have ever gotten for my writing have been
about this book, from scholarly editors whose work I respect.
I have also gotten dismissive remarks from folks who think it is too
hard to follow, especially its key chapter, "Text as Concept,
Matter, and Action". But I was wrong about that book being the
place to work out my final thoughts on scholarly editing.
Keeping up with
developments in electronic scholarly editing, and in particular going
to work as a colleague of Peter Robinson at De Montfort University,
revitalized my attention to the potential of electronic editing.
However, it does not take a rocket scientist to see what was wrong
with all the electronic edition prototypes that had been developed by
2005: they were either built by serious textual critics with poor
technical support (ugly but useful) or by ersatz textual critics with
wonderful technical support (pretty but amateurish scholarship). Or
they were simply still stuck in decisions made in the 1990s: stuck
with HTML or XML's hierarchies, which prevent a swathe of things
scholarly editors want to do; committed to transcriptions because
images were too slow; and mired in the inflexibility of the social
editorial principles espoused primarily by Jerry McGann, who was,
without any doubt, the most influential textual critic of the last
twenty years.
So, I wrote From Gutenberg to Google: Electronic Representations of
Literary Texts (2006), trying my best to explore what it meant to
represent in a new medium the complexities of an old one. Driving my
thoughts were a respect for the elusive past--the history of
particular literary
works in the print era--and a deep curiosity about what could be
accomplished in new digital media. What I mainly discovered is how
much I did not know in areas where I thought I had known enough. The
book is, in many ways, the work of an amateur in fields related to
textual criticism. I despair of ever knowing enough to write a
professional book on the subject. A number of people have been kind
enough to say that they have learned things from me in that book, but
I fear I have "shared ignorance" with them, as well. Much
of From G2G looked to
the future, focusing on what was wrong with the first two or three
generations of electronic scholarly editing and imagining what would
be better. Six years later, having spent a lot of time working with
computing professionals at Loyola University Chicago on HRIT
(Humanities Research Infrastructure and Tools), I think it might be
time for a new book rather than a reprise of SECA.
The research side of textual criticism has not changed; the
delivery side, and the tools for building it, remain at an
unsatisfactory stage of development.