Monday, October 8, 2012

Access Baggage


Accessibility is an important aspect of ANY design, not just in the realm of the Digital Humanities.  This is a concept put forth by George Williams in his article, “Disability, Universal Design, and the Digital Humanities,” but there are some instances in his article that I take issue with.  I used to work in the hotel industry, specifically on the design and renovation side of things, and I am aware of the restrictions that face owners and developers when it comes to accommodating the disabled.  IRL, the accommodations are quite labor-intensive: raising a sink or lowering a bed, building a ramp, or swapping out fixtures that do not comply with ADA guidelines.  The reason for extensive renovation and construction is that these specific accommodations did not exist before and cannot possibly be supplied by the individual with the disability on his or her own (a person in a wheelchair cannot maneuver up stairs without a ramp and cannot use a restroom stall that is too small for the chair). 
In the case of disabled people in the digital world, these same limitations of the constructed environment do not apply.  If a blind person is at home and uses a computer on a regular basis, be it for scholarship or simply to listen to Pandora or shop on Amazon, that individual has already equipped him- or herself with specialized hardware and software for navigating a variety of virtual worlds.  They would already have talk-to-text technology or a refreshable Braille display (which I had never heard of until Williams’ article).  In fact, these self-procured apparatuses would be more useful to such individuals than a “universal design” created by a site developer.  The insinuation of Williams’ article is that the current crop of digital humanities projects should build in accessibility for persons with disabilities, which creates programming and operability issues for developers and users alike when different programs and applications are introduced to the same site.  What I am pointing out are the possible (probable) glitches of developing such a site and the possible (probable) redundancies of its accessibility features. 
If an individual already possesses an after-market text reader, why would a site need to provide one that is far inferior to the one already in the user’s possession?  It may seem like I am generalizing about the disabled community, but in the case of a blind person who uses a computer workstation as much as a sighted person like me or my colleagues, wouldn’t that person be equipped with the tools to navigate the digital world just as I am? 
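For context, here is what site-side accommodation looks like at its most basic: a toy audit of my own devising (assuming Python with the third-party BeautifulSoup library, which is not anything Williams mentions) that flags images lacking the alt text a user’s own screen reader depends on.

from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

# Screen readers rely on alt text to describe images; this toy check
# flags <img> tags that a user's own reader would have nothing to say about.
html = """
<html><body>
  <img src="portrait.jpg" alt="Engraved portrait of Martin Delany">
  <img src="divider.png">
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")
for img in soup.find_all("img"):
    if not img.get("alt"):
        print("Missing alt text:", img.get("src"))
# prints: Missing alt text: divider.png

Even a check this small is a site-side provision, which is precisely the kind of thing I am calling redundant for an already-equipped user.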
I may be nitpicking a bit on the ADA compliance of websites, so I will move on to another type of access issue cited by Williams.  The third reason Williams proposes for universal design is physical access, that is, how people reach the web through mobile devices.  He cites some statistics on the number of people in certain demographics who access the web via mobile phones or tablets, and argues that design should keep this in mind.  He then goes on to note that those more likely to use a mobile device for online access include “African Americans, Hispanics, and individuals from lower-income households” (206).  The reason for bringing these statistics into the conversation is that, “if the digital humanities are to create resources accessible by a diverse array of people, then compatibility with mobile devices is necessary” (206).  OK, I see what you are doing here, but I can only feel that the statistics are inapt and reveal nothing. 
The numbers say that a large majority of Black, Brown, and poor people (I do appreciate how the first two demographics were diplomatically separated from the third, though placed in close proximity to indicate that the conflation of race and class is ever-present) access the web via mobile device.  Access it how, is my question.  Are these statistics supposed to show academic research, the kind hosted by NINES and the Old Bailey Online, being accessed?  Or merely the usual access to the web that the majority of people with mobile devices engage in, that is, Facebook, Yahoo, Pandora, Wikipedia, etc.?  Are we supposed to believe, from these statistics, that those who access the web via phones or tablets could only perform research via these devices?  The implication is that forthcoming DH projects should keep mobile devices in mind so as not to ignore demographics that have been overlooked sociologically for decades and centuries, and that to account for them in the realm of DH is of utmost importance.  I am not saying that DH should ignore these demographics, but I am saying that invoking demographic access to the web is specious.  Black and Brown and poor users may access the web primarily via mobile devices more often than their white and higher socio-economic counterparts, but there is a rather large jump from common access to research. 
What I mean here is that a person of color or of certain economic means would access the MLA bibliography, or WorldCat, or NINES in much the same way I—a white, middle-class male—would.  I do not make this generalization casually.  I simply do not think that ANYONE, regardless of race or status, would perform rigorous academic research via a phone.  There is no sense in this practice, as the interface is too small to be efficient and the memory too limited.  It would be like a carpenter building a house with a child-size hammer.  It could be done, but it makes no sense and would simply take forever, especially when there are larger, more appropriately sized tools at the builder’s disposal. 
But the statistics say that the larger, more appropriate tool is not available to the specific demographic!
Really!?  Why would ANYONE be performing scholarly research?  School comes to mind, and at every institution I have attended there are slews of computers available for student use—for free.  Heck, as a middle-class white male, I only got my first laptop when I was 26.  And I wrote it off as a school expense.  If someone were performing the sort of online activity that Williams is talking about in his article, there are several very likely solutions to the issue of primary web access, such that universal design for this specific demographic (that is, people who rely on mobile devices) is extraneous. 
My only purpose in this post is to point out how theory and theoreticians tend to find issues with things that common sense can address.  I like the idea of inclusive design.  I like being inclusive and taking into account the limitations and advantages of people.  What Williams’ article seemed to do, in my opinion, is exploit the fears of people.  Fears?  What fears?  Cultural/racial/situational insensitivity comes to mind.  I don’t see any reason why the race card should have been pulled in this situation.  I may be insensitive myself, here, thinking that there is no problem when, in reality, there is a major one.  I just don’t know the lay of the land as far as disabled access and racialized access are concerned; but to think that one’s disability or race or financial situation dictates how one approaches a scholarly site like NINES ignores a very pertinent aspect of the conversation: the temporal situation of the person performing the research.  Such a person is in the academy in some capacity—teacher, researcher, or student—and got to that level by some means.  Casual internet visitors are not going to visit NINES.  There is no reason for it.  There is an agenda behind the traffic on these sites, namely scholarship, and as stated before, scholars, at any level and from any background, have options. 
A high school kid with a C+ average and a garage band will not visit NINES.  They will visit Pandora and Yahoo and Amazon, though, so perhaps Williams' racialized argument should be directed toward those sites.

But I could be wrong.

Canon in D. Major

Our discussion in the last class meeting was terrific!  I have no other words for it:  It was terrific.  I think this is because the readings for this week were a bit more in our comfort zone as English Majors and a bit less involved with the digital world in general.  Matthew Wilkens’ article, “Canons, Close Reading, and the Evolution of Method,” dealt with our old notion of Canon(s) and Canon construction, and complicated the matter by noting just how many new books are added to the backlog of texts on a yearly basis.  This is what he describes as a “problem of abundance” (250), and with ever more novels being published each year, our old way of reading—close reading—needs to evolve to keep up with the sheer volume of books yet to be encountered.  The books are able, he posits, to tell us something about the culture that produced them, and since we cannot read them all in order to extract this information, we should be devising modern, technologically assisted means of digesting these incalculable pages.  His “anything and everything else” approach to “data mining” these texts takes the form, as he shows the reader through an extended example and figures in the text, of distant reading (for a good explanation click here). 

I really do like the idea of distant reading, though as a supplement to close reading, not as an alternative.  The example that Wilkens provides takes all of the books, popular and obscure, published in a 25-year window starting in 1851, and data mines them to find place names, foreign and domestic, then plots these mentions (or multiple mentions) on a map.  The purpose of this exercise is to note how often places are named or written about in an era thought by scholars to be dominated by the Northeast.  What this example shows is that the culture that produced all of these texts had in mind places overseas, in Europe and Asia, with several mentions of South America and points plotted in Australia.  Texts in this period mention these places, shifting the primary focus from New England to the rest of the world—or so it would seem.
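To make the mechanics concrete, here is my own toy sketch of the kind of extraction Wilkens describes (not his actual code, and using a hand-rolled gazetteer where his project would need full named-entity recognition):

from collections import Counter
import re

# A toy gazetteer; a real project would use a named-entity recognizer
# and a far larger list of place names.
GAZETTEER = {"London", "Paris", "Boston", "Calcutta", "Sydney", "Lima"}

def count_place_names(text):
    # Crude tokenization: grab capitalized words, keep the known places.
    tokens = re.findall(r"[A-Z][a-z]+", text)
    return Counter(t for t in tokens if t in GAZETTEER)

sample = "From Boston she sailed to London, and thence to Calcutta. London again!"
print(count_place_names(sample))
# Counter({'London': 2, 'Boston': 1, 'Calcutta': 1})

Aggregate those counters across a corpus and you have the raw numbers behind Wilkens’ maps, which is exactly why my questions below concern what the numbers leave out.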
In our discussion, I had some questions about this practice.  The ability of technology to extract these very distinct instances and utterances (is utterance the right word for a text?) shows us, the scholarly reader, what, exactly?  That other places are mentioned?  What of the context of these mentions?  Were they casual in nature?  Hostile?  Romanticized?  What were these utterances and how do they illustrate the cultural attitudes of the time?  The concept seems a bit removed from what reading itself is supposed to be—the encountering of a text.  Distant reading, in this sense, seems like a brief flyover.

One of my colleagues noted that the way the programs work in this situation is very much like how Google Books works:  they find these words and send back results in the context of how they were found, so “Australia” would be found in the context of “Australia is an island peopled entirely of criminals” as opposed to simply as “Australia”.  This example is glib, I know, but the way that I read Wilkens’ article indicates that it is ostensibly correct.  Wilkens never once mentioned text in context, only that the methodology of finding place names needed to be tweaked because several places include proper names and this needed to be accounted for.  The reader sees three maps showing the locations, but not what these locations mean.  I appreciated that my colleague mentioned that the results would come in context, but the published example would have benefitted from proffering this information.  Another colleague of mine noted that data mining in this manner is helpful beyond the confines of the Literature profession, which, again, I am grateful to have heard, seeing as we are all lit. people and thinking beyond our boundaries, at least for me, is difficult.

However, despite these few moments of relief, something was still bothering me:  Data mining can read a text, and even provide the statistician with context, but is in NO WAY capable of providing subtext.  What is meant by an utterance or turn of phrase cannot be picked up by a machine (anyone who has heard how sarcasm doesn’t translate well in an e-mail will know that what I am saying here is true), so there is much that data mining will miss in relation to the culture that produced these texts. 
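What my colleague described is essentially a keyword-in-context (KWIC) view, the staple of concordance software.  A minimal sketch of one, assuming nothing beyond the Python standard library:

def kwic(text, keyword, window=30):
    # Return each hit with `window` characters of context on either
    # side, roughly the snippet view Google Books sends back.
    hits, start = [], 0
    lowered, key = text.lower(), keyword.lower()
    while (i := lowered.find(key, start)) != -1:
        left = text[max(0, i - window):i]
        right = text[i + len(key):i + len(key) + window]
        hits.append("..." + left + "[" + text[i:i + len(key)] + "]" + right + "...")
        start = i + len(key)
    return hits

text = "Australia is an island peopled entirely of criminals, or so the joke goes."
for line in kwic(text, "Australia"):
    print(line)
# prints: ...[Australia] is an island peopled entirely...

Context, yes; but notice that even this gives the researcher a window of characters, not a reading, and the subtext problem remains untouched.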

To punctuate my final point above, I cited James Joyce as an example.  This did not go over well.  At least not initially.  As soon as Joyce was invoked, several of my colleagues jumped up and shouted (not really):  Joyce is perfect for data mining.  His work is so dense.  We would benefit from DH in his case. 

It seemed as though He was the first author of DH data mining.  I may have blasphemed and taken the Lord’s name in vain. 

I continued: Yes, his work is dense, and much of its depth lies in the subtext of his writing, the collective, cultural meanings that are revealed only after repeated readings.  These tend to differ from person to person, and in the case of universalities, they reveal themselves at different times and to different degrees.  How could/would data mining be able to pull back the layers of subtext when these layers are invisible to a mechanical eye?  No, we know nothing of Joyce—at least not all of it—and it is only through reading the text and mining it ourselves that the mysteries of Joyce can be revealed.  Data mining, in this case, would be the mountaintop-removal method, as opposed to the individual prospector.  Prospectors are in touch with the environment, and readers are very similar, knowing the terrain of the text.  This insider knowledge, as far as the Literary Profession is concerned, should be at the fore, not the mechanized version.

Save that for sociology.


Sunday, September 30, 2012

Tech N9NE


This past week we looked into the NINES website (www.nines.org) in an attempt to see what the site could do for our understanding of the Digital Humanities, as well as what it could do for us in our own research.  The discussion was terrific, despite the fact that I was the one tasked with leading it, but I am not sure that we were able to come to a consensus as to what the site could do for us. 
In my own private exploration of the site, I ignored the who and the how, that is, who is responsible for the site and how it runs.  I am not really all that interested in knowing how my tools work, as Stephen Ramsay and Geoffrey Rockwell note, because a “well-tuned instrument might be used to understand something, but that doesn’t mean that you, as the user, understand how the tool works” (80).  How does knowing that the site was developed by Jerome McGann (http://en.wikipedia.org/wiki/Jerome_McGann) assist me in using it for my research?  So I jumped right into the site and tried to use it as intended—for research.  I had just read Martin Delany’s Blake, so I decided to use this text as the title/keyword for my search.  The results were varied.  The first time I attempted the search, what returned was:

No NINES objects fit your constraints. Remove Last Constraint.

So I did, and tried the search again, but using only the title, the result of which was:

No NINES objects fit your constraints. Remove Last Constraint.

How could I remove the last constraint?  It was the ONLY constraint!  So I opted to start all over again, only this time I would not begin with the advanced search.  I went to the home page, typed the title of the text into the generic search bar, and after a few moments of waiting (the search was obviously working, as it was taking so long to load), I was given a long list—over a hundred titles—of articles and reviews dealing with the text or, more commonly, dealing with Martin Delany.  The links took me to JSTOR and Project Muse, but as I was not signed into these sites through my university’s library page, all I was able to see was the citation and the first page of each article. 
OK, thought I, NINES can help me find things that I could find through JSTOR and Academic Search Complete (an EBSCO-hosted interface).  How does this help me, seeing as I could have done this on my own, independent of NINES?  I concluded that, in this case, I was not being helped. 
Perhaps the issue is due to the subject.  OK, thought I, I will change it to something a little more Victorian.  Charles Dickens seems about right.  

Search Results (19,594) 

This is FAR too many results, so I added the constraint “Women” 

Search Results (2,369) 


This is FAR too many results, so I added the constraint “Insanity” and got back:

Search Query
Add new search criteria or select limiters to refine your search

Search Term: Blake [Remove Term]
Search Term: or [Remove Term]
Search Term: the [Remove Term]
Search Term: Huts [Remove Term]
Search Term: of [Remove Term]
Search Term: America [Remove Term]

No NINES objects fit your constraints. Remove Last Constraint.

Notice how the platform reverted to my Blake search?  In one click, my search on this gorgeous website prompted me to think two things:
1. Nuts to this, I’m using EBSCO; and
2. What a pretty site.  It’s a shame that I won’t ever be using it.

Johanna Drucker notes that the “persuasive and seductive rhetorical force of visualization performs such a powerful reification of information that graphics such as Google Maps”—or, in this case, NINES—“are taken to be simply presentations of ‘what is’” (86).  If I am reading this correctly, and I like to think that I am, she is asserting that the way a digital artifact looks matters more to our (the user’s) epistemology than the actual information that the artifact presents.  I categorically reject this assertion on a scholarly level, but recognize its truth on a visceral, consumer level.  Drucker is pointing out that a site like Google Maps, or NINES, trades more on its aesthetics than on its knowledge dissemination.  I am guilty of falling for this marketing trap; but I have realized, after having poked around on the site, that no matter how the site looks, if it cannot get me to the information I am seeking, it is of no use to me as a scholar.
But what about what the site is attempting to do, as a site itself—not the applications that I am using it for?  What about this dimension?  The artifact is here now, but where are my results?  Our class conversation touched on this concept and attempted to batten down an answer, but only got so far as recognizing that “AJ wants the site to work better, and Brandon (http://www.pixelscholars.org/brandongalm) says ‘Just wait, it will be better.’”
Frankly, I am not convinced that NINES will be better; but I am not really all that pessimistic.  If the technology of NINES can do what the site purports it can, there are several scholarly applications that can be derived from the site.  Time will tell, I suppose; but there is precious little time!

*Note: all citations above come out of Debates in the Digital Humanities, Ed. Matthew K. Gold (http://www.2shared.com/document/5HbhJuMi/Debates_in_the_Digital_Humanit.html)



Saturday, September 22, 2012

The History Boy

This week’s readings and in-class discussion were probably the best that we have had in the past few weeks in that they attacked the issue of what the digital humanities are in a very specific way.  We started out with a tutorial on programming languages, which was extremely eye-opening.  We got the gist of creating a “static website” using HTML, and learned a bit about how to make the site more “dynamic.”  Beyond that, we jumped into the digital world.  The lesson helped, at least it did for me, to understand digital language in a practical way, to understand the use of metadata for a site or a search, and to use the most basic markup to physically see how a web page works and what goes into creating it.  Practical application is, I think, the only way for someone as thick as I am about the digital world to understand the methods and methodology behind a DH project. 
So, Thank You, Adam.
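For the record, the sort of bare-bones page we produced can be generated in a few lines.  Here is a toy sketch of my own (not Adam’s actual lesson) that writes a minimal static page, metadata and all, to disk:

from pathlib import Path

# A minimal static page: the <meta> tags are the "metadata for a site
# or a search" we discussed; everything else is fixed markup that only
# changes when someone edits the file (hence "static").
PAGE = """<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta name="description" content="A bare-bones static page">
  <meta name="keywords" content="digital humanities, Victorians">
  <title>Digitizing the Victorians</title>
</head>
<body>
  <h1>Hello, static web</h1>
  <p>Nothing here changes until someone edits this file.</p>
</body>
</html>
"""

Path("index.html").write_text(PAGE, encoding="utf-8")
print("Wrote index.html; open it in a browser to view the page.")

A “dynamic” site, by contrast, would assemble that markup on the fly, per request.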
In the second half of the class we discussed the intro and first chapter of James Mussell’s The Nineteenth-Century Press in the Digital Age.  What made these readings so incredibly informative was their use of parallels, as opposed to differences, betwixt the physical, “analog” artifact (in this case, periodicals from 19th-C Britain) and the digitized version(s) found at several different sites, including http://www.victoriandatabase.com/index.cfm and http://www.amdigital.co.uk/collections/Victorian-Popular-Culture.aspx, which are repositories of SEVERAL digital versions of newspapers, periodicals, playbills, advertisements for diversions, etc.  Mussell asserts that the print materials of the Victorian age are similar to contemporary digital productions in their use of new technology, their culture-shaping and culturally shaped position, as well as their proliferation.  Reproductions of 19th-century periodicals have, in times past, ignored their larger cultural position by privileging the text (that is, the “articles” and fiction common in these publications) over the items surrounding the text—miscellany, current events, weather forecasts, etc. (Mussell 31).  This is a very anthropological and New Historicist view of literary studies. 
The parallel referred to above derives from Mussell’s collapsing of the two productions—periodicals and their digital versions—noting how contemporary preservation via digitization includes this miscellany so as to enable the contemporary scholar to understand a day in the life of a Victorian, and to understand some of the events and news, popular culture and popular stories, that would have shaped the minds of those writers who have benefitted from the privilege afforded them by editors of collections and scholars of the past.  The way I view it, the DH projects of today are concerning themselves with the cultural makeup of the texts we prize, rather than simply fetishizing the author(s), and in so doing they recognize the collaborative efforts of publishers, printers, editors, paper-makers, livery drivers, retailers, readers, AS WELL AS writers that culminate in the production of a finished text.  By highlighting this interrelationship in a historical fashion, Mussell made me see the purpose of a DH project by showing that it is a collaborative, interdisciplinary endeavor.  I do not think that this message would have hit me so forcefully had it not been paralleled with print history in a time that I, as a literary scholar, am familiar with. 
Simply put, by taking the digital out of the equation, I am better equipped to understand the concepts and add the digital in my own time, getting a grip on the subject before it is problematized by the DI, W3C, metadata, XML, CMS, HTML, PHP, etc.

Thursday, September 13, 2012

Interoperability and Forgetting about the User

The first meeting of our class, Digitizing the Victorians, two Tuesdays ago, left me and some of my colleagues a bit in the dark.  I do not mean to say that we were disappointed; I only say this because the topic for discussion was “Defining the Digital Humanities,” and we left with no real consensus as to what the digital humanities are.  We realized that the term is in a state of praxis, and that the definition(s) are not necessarily outlined in a specific way, but are brought out in the course of engaging in the field of the digital humanities.  This leads me to believe that this emerging field, which is at the forefront of our academic discipline and which has the potential to generate a great deal of money and “buzz” for the institutions that embrace it—or just note its existence—can be defined much like Associate Justice Potter Stewart’s approach to pornography:  I can’t define what it is, “but I know it when I see it” (Jacobellis v. Ohio, 1964). 
                So, for now, I will operate under the assumption that I will know the digital humanities when I see it.
This outlook was actually quite freeing.  When next the class met, I was less concerned with defining the field and more concerned with simply jumping in and interacting with a digital project or five.  We read a few articles and looked into a few websites to see how they worked.  As far as the articles were concerned, some were EXTREMELY helpful and others were a mire of technical jargon and presuppositions, which made me feel, once again, that I was not equipped to be part of the cutting edge.  One of my colleagues is a Medievalist—I’m an 18th-century British scholar—and we both noted, during discussion, that the technology of today is akin to the technological advance of Gutenberg’s moveable type.  Nothing has changed in the last 500+ years, but everything has.  Our resistance, or shall I say MY resistance, comes from the same place as the resistance to previous historical technological “leaps forward”.  John Walsh addresses this history briefly at the outset of his article, “Multimedia and Multitasking: A Survey of Digital Resources for Nineteenth-Century Literary Studies” (http://tiny.cc/ivp9iw).  I was again comforted by the fact that I was a) not alone in my feelings; and b) part of a historically ongoing practice in the field of literature and the humanities—constant upgrades and changes are what make for a richer, more engaging field.  What Walsh did, also, was provide me with a list of digital projects in existence now as a guide to what the field can do and is doing for the works of Victorian thinkers and writers.  Andrew Stauffer performs a very similar task in his review on NINES, providing an annotated bibliography of projects and resources online that show what the digital humanities are capable of doing (http://www.nines.org/exhibits/VLC_review). 
With much of my apprehension gone, I would like to provide a brief anecdote about my recent foray into the digital humanities, which is influenced a bit by Howard Besser’s article on digital libraries (http://tiny.cc/x5r9iw).  I am a graduate assistant for a professor here at the Indiana University of Pennsylvania (IUP), and we are starting out on a new digital project to transcribe a nearly two-hundred-year-old field journal.  We are using a new, online-based program to read the digital images of the artifact and to aid in the transcription process.  I have been tasked with figuring out how the program works.  I was given a snapshot of its processes during a meeting at the school’s Digital Humanities Center, and thought, “OK, this should be fun!”  Of course, the snapshot was just that—a snapshot; and the hardware being used wasn’t mine, but belonged to the DHC.  I went home and thought that I could do this. 
I logged onto the site and there was an immediate issue—my web browser did not support the program.  I needed to download another browser.  I tried and I tried, but my hardware and the new software were not jibing.  An hour and a half later I got Google Chrome installed and running on my laptop, and when I reentered the site the program ran beautifully.  This experience is similar to the issues that faced early online libraries and digital collections ten or fifteen years ago, where there was a lack of what Besser called interoperability.  My tools did not sync up with the online tools, so an immediate switch to another system was needed, much to my aggravation.  But moving on.
I started to work with the program, poking around to see how it worked, and I guess I was missing something.  I was trying to define terms, to figure out what each application was for and what happened when X was clicked or Y was moved.  I was experiencing a disconnect, however, because there was no real place within the program that would explicate what the different attributes meant or were used for.  I continued to play around with the tools, trying to establish image boxes around lines of text on the image, preparing to transcribe them, but every box I rendered caught only a portion of the text, as the original (handwritten) text sloped upward and the lines were very close to one another.  The program was only reading some, maybe a third, of the lines I was creating, and was ordering them in what I could only understand as random order.  I tried and tried to correct this, and finally, after about an hour of drawing and redrawing lines, said, “Forget it!” and was reduced to working within the program’s boundaries.  I resigned myself to using the lines it could read and attempted to start transcribing.
Transcribing in XML is something that I am a little more familiar with, using the oXygen program, so I thought that this process would be similar.  How wrong I was.  The transcription box in the online program was just that, a box that corresponded to one of the lines I attempted to plot.  The first line, according to the program, was not the first line of the text image, which was the first line that I rendered.  Odd, I thought, but let’s just see how this works.  Well, it didn’t.  I tried to encode, but I needed to encode manually, which is no great issue, though there are tabs in the transcription box that are supposed to be “place savers” for frequently used tags.  I attempted to use these tabs to no avail.  There was nothing that told me how to manipulate the tabs, nothing to point me in the direction of proper use.  Even just playing with them produced nothing that could be accurately called “useful”.  The tabs, then, remained adornments to the site as far as I was concerned, as opposed to tools for my use. 
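For anyone wondering what I was expecting, here is a toy sketch of my own (not the program’s actual format, and assuming rough TEI-style conventions) of line-level transcription markup built with Python’s standard library:

import xml.etree.ElementTree as ET

# A rough TEI-style transcription of two journal lines; <lb/> marks a
# line break, following common TEI convention. This is my illustration,
# not the online program's output format.
page = ET.Element("div", attrib={"type": "page", "n": "1"})
for n, line in enumerate([
    "Set out at dawn along the river,",
    "the weather fine, the water low.",
], start=1):
    lb = ET.SubElement(page, "lb", attrib={"n": str(n)})
    lb.tail = line  # the transcribed text follows its line-break marker

print(ET.tostring(page, encoding="unicode"))
# prints: <div type="page" n="1"><lb n="1" />Set out at dawn ... </div>

In oXygen I can type exactly this; the online program gave me no equivalent window into what its boxes and tabs were producing.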
I transcribed five lines, out of order according to the program’s reading of my plotted line boxes, and tried to preview what the encoded text would look like.  It was a disaster.  Nothing made sense, but I plodded on and tried to save the work, just to see how saving worked.  Again, nothing was at all user-friendly.  I had no idea how I could save, where it was saving to, what it was going to be called, and, worse yet, how I was going to find it when next I started to work.  The program was, in my opinion, LESS THAN user-friendly.  Besser speaks of interoperability, but does not really address the end user, that is, someone like me who would use the digital product for work or research.  I have found that working in the Digital Humanities requires that the product—the published project put online for “use”—keep a keen eye to user operability.  Our class discussion noted this lack of attention to the user in the texts that we read for class, texts ostensibly meant to aid in our understanding of the field and its productions.  We thought it odd that the implied user never really came up in the literature.  As I found out in my own encounter with digital tools for scholarly, humanistic work, transparency and clarity are deeply needed. 
I have a meeting today with my professor/boss.  I will tell him about my experiences, and I hope that together we can either figure out this program or realize a different strategy for moving forward with the project.  We need something that is user-friendly, interoperable, and productively serves the project at hand.

Sunday, September 9, 2012

Welcome to the DigiDome

This, my fellow sojourners in the digital humanities, is the first post on my first blog.  I look forward to reading about everyone else's experiences with the subject and, getting meta, about their experiences blogging about the class.