Monday, October 8, 2012

Access Baggage


Accessibility is an important aspect of ANY design, not just in the realm of the Digital Humanities.  This is a concept put forth by George Williams in his article, “Disability, Universal Design, and the Digital Humanities,” but there are some points in his article that I take issue with.  I used to work in the hotel industry, specifically on the design and renovation side of things, and I am aware of the restrictions that face owners and developers when it comes to accommodating the disabled.  IRL, the accommodations are quite labor intensive: raising a sink or lowering a bed, building a ramp, or swapping out fixtures that do not comply with ADA guidelines.  The reason for such extensive renovation and construction is that these specific accommodations did not exist beforehand and cannot possibly be supplied by the individual with the disability on his or her own (a person in a wheelchair cannot maneuver up stairs without a ramp and cannot use a restroom stall that is too small for the chair).
In the case of disabled people in the digital world, these same limitations of the constructed environment do not apply.  If a blind person is at home and uses their computer on a regular basis, be it for scholarship or simply to listen to Pandora or shop on Amazon, that individual has equipped themselves with specialized hardware and software that allows them to navigate a variety of virtual worlds.  They would already have talk-to-text technology or a refreshable Braille display (which I had never heard of until Williams’ article).  In fact, these self-procured apparatuses would be more useful to such individuals than a “universal design” created by a site developer.  The insinuation of Williams’ article is that the current crop of digital humanities projects should build in accessibility for persons with disabilities, which creates programming and operability issues for developers and users alike when different programs and applications are introduced to the same site.  What I am pointing out are the possible (probable) glitches of developing a site this way and the possible (probable) redundancies of its accessibility features.
If an individual already possesses an aftermarket text reader, why would a site need to provide one that is far inferior to the one already in the user’s possession?  It may seem like I am generalizing about the disabled community, but in the case of a blind person who uses their computer workstation as much as a sighted person like me or my colleagues does, wouldn’t they be equipped with the tools that allow them to navigate the digital world just as I am?
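For concreteness, here is what site-side “universal design” often boils down to in practice: making sure a page’s content is in a form that a screen reader or refreshable Braille display can actually consume. The little audit below is a minimal sketch of my own, not anything from Williams’ article; it uses only the Python standard library, and the sample page is invented.

```python
# A minimal sketch (my own illustration, not from Williams' article) of a
# site-side accessibility check: counting <img> tags that lack the alt text
# a screen reader or refreshable Braille display depends on.
from html.parser import HTMLParser

class AltTextAuditor(HTMLParser):
    """Counts <img> tags and flags those missing an alt attribute."""
    def __init__(self):
        super().__init__()
        self.total = 0
        self.missing_alt = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.total += 1
            if "alt" not in dict(attrs):  # attrs is a list of (name, value)
                self.missing_alt += 1

page = '<p>Hello</p><img src="map.png" alt="Map of place names"><img src="x.png">'
auditor = AltTextAuditor()
auditor.feed(page)
print(f"{auditor.missing_alt} of {auditor.total} images lack alt text")
# -> 1 of 2 images lack alt text
```

Checks like this are what a developer-supplied “universal design” amounts to; my point stands that the heavy lifting is done by the user’s own assistive hardware and software, not by the site.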
I may be nitpicking a bit on the ADA compliance of web sites, so I will move on to another type of access issue cited by Williams.  The third reason Williams proposes for universal design is physical access, that is, how people reach the web through mobile devices.  He cites statistics on the number of people in certain demographics who access the web via mobile phones or tablets, arguing that design should keep this in mind.  He then notes that those more likely to use a mobile device for online access include “African Americans, Hispanics, and individuals from lower-income households” (206).  The reason for bringing these statistics into the conversation is that, “if the digital humanities are to create resources accessible by a diverse array of people, then compatibility with mobile devices is necessary” (206).  OK, I see what you are doing here, but I can only feel that the statistics are inapt and reveal nothing.
The numbers say that a large majority of Black, Brown, and poor people (I do appreciate how the first two demographics were diplomatically separated from the third, though placed in close proximity to indicate that the conflation of race and class is ever-present) access the web via mobile device.  Access it how, is my question.  Are these demographics supposed to show academic resources, the kind that NINES and Old Bailey Online represent, being accessed?  Or merely the usual access to the web that the majority of people with mobile devices engage in, that is, Facebook, Yahoo, Pandora, Wikipedia, etc.?  Are we supposed to believe, from these statistics, that those who access the web via phone or tablet could only perform research via these devices?  The implication is that forthcoming DH projects should keep mobile devices in mind so as not to ignore demographics that have been overlooked sociologically for decades and centuries, and that accounting for them in the realm of DH is of utmost importance.  I am not saying that DH should ignore these demographics, but I am saying that invoking demographic access to the web is specious.  Black and Brown and poor users may access the web via mobile devices more often than their white and higher socio-economic counterparts do, but there is a rather large jump from common access to research.
What I mean here is that a person of color or of certain economic means would access the MLA bibliography, or WorldCat, or NINES in much the same way I—a white, middle-class male—would.  I do not make this generalization casually.  I simply do not think that ANYONE, regardless of race or status, would perform rigorous academic research via a phone.  There is no sense in this practice: the interface is too small to be efficient, and the memory is too limited.  It would be like a carpenter building a house with a child-size hammer.  It could be done, but it makes no sense and would simply take forever, especially when there are larger, more appropriately sized tools at the builder’s disposal.
But the statistics say that the larger, more appropriate tool is not available to the specific demographic!
Really!?  Why would ANYONE be performing scholarly research in the first place?  School comes to mind, and at every institution I have been to there are slews of computers available for student use—for free.  Heck, as a middle-class white male, I only got my first laptop when I was 26.  And I wrote it off as a school expense.  If someone were performing the sort of online activity that Williams is talking about in his article, there are so many likely solutions to the issue of primary web access that universal design for this specific demographic (that is, people who rely on mobile devices) is extraneous.
My only purpose in this post is to point out how theory and theoreticians tend to find issues with things that common sense can address.  I like the idea of inclusive design.  I like being inclusive and taking into account the limitations and advantages of people.  What Williams’ article seemed to do, in my opinion, is exploit the fears of people.  Fears?  What fears?  Cultural/racial/situational insensitivity comes to mind.  I don’t see any reason why the race card should have been pulled in this situation.  I may be insensitive myself here, thinking that there is no problem when, in reality, there is a major one; I just don’t know the lay of the land as far as disabled access and racialized access are concerned.  But to think that one’s disability or race or financial situation dictates how they approach a scholarly site like NINES, say, ignores a very pertinent aspect of the conversation: the temporal situation of the person performing the research, specifically that they are in the academy in some capacity, be they teachers, researchers, or students, and got to this level by some means.  Casual internet visitors are not going to visit NINES.  There is no reason for it.  There is an agenda behind the traffic on these sites, namely scholarship, and as stated before, scholars, at any level and from any background, have options.
A high school kid with a C+ average and a garage band will not visit NINES.  They will visit Pandora and Yahoo and Amazon, though, so perhaps Williams' racialized argument should be directed toward those sites.

But I could be wrong.

Canon in D. Major

Our discussion in the last class meeting was terrific!  I have no other words for it:  It was terrific.  I think this is because the readings for this week were a bit more in our comfort zone as English majors and a bit less involved with the digital world in general.  Matthew Wilkens’ article, “Canons, Close Reading, and the Evolution of Method,” dealt with our old notion of Canon(s) and canon construction, and complicated the matter by noting just how many new books are added to the backlog of texts on a yearly basis.  This is what he describes as a “problem of abundance” (250), and with ever more novels being published each year, our old way of reading—close reading—needs to evolve to keep up with the sheer volume of books yet to be encountered.  The books are able, he posits, to tell us something about the culture that produced them, and since we cannot read them all in order to extract this information, we should be devising modern, technologically assisted means of digesting these incalculable pages.  His answer, as he shows the reader through an extended example and figures in the text, is to use “anything and everything else” to “data mine” these texts: in short, distant reading (for a good explanation click here).

I really do like the idea of distant reading, though as a supplement to close reading, not as an alternative.  The example that Wilkens provides takes all of the books, popular and obscure, published in a 25-year window starting in 1851, and data mines them to find place names, foreign and domestic, then plots these mentions (or multiple mentions) on a map.  The purpose of this exercise is to note how often places are named or written about in an era thought by scholars to be dominated by the Northeast.  What this example shows is that the culture that produced all of these texts had in mind places overseas, in Europe and Asia, with several mentions of South America and even points plotted in Australia.  Texts in this period mention these places, shifting the primary focus from New England to the rest of the world—or so it would seem.
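To make the mechanics concrete, a bare-bones version of this kind of place-name mining can be sketched in a few lines. To be clear, this is a toy gazetteer lookup of my own invention, not Wilkens’ actual pipeline (he describes needing proper named-entity methods plus manual correction, since many places double as personal names); the gazetteer and the sample text here are made up.

```python
# A toy sketch of gazetteer-based place-name counting, in the spirit of
# Wilkens' example but NOT his actual method. Plotting the results on a map
# would additionally require coordinates for each gazetteer entry.
import re
from collections import Counter

GAZETTEER = {"London", "Paris", "Boston", "Australia", "Peru", "Canton"}

def count_place_names(text: str) -> Counter:
    """Tally every token in the text that matches a known place name."""
    tokens = re.findall(r"[A-Z][a-z]+", text)  # crude: capitalized words only
    return Counter(tok for tok in tokens if tok in GAZETTEER)

corpus = ("The ship left Boston for London, then Paris. "
          "Letters arrived from Australia and from Peru.")
print(count_place_names(corpus))
# Counter({'Boston': 1, 'London': 1, 'Paris': 1, 'Australia': 1, 'Peru': 1})
```

Even this crude version makes the appeal obvious: run it over thousands of novels and you get a map of a culture’s geographic imagination. It also makes my worry below obvious: a tally tells you that “Australia” was uttered, not what was meant by it.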
In our discussion, I had some questions about this practice.  The ability of technology to extract these very distinct instances and utterances (is utterance the right word for a text?) shows us, the scholarly reader, what, exactly?  That other places are mentioned?  What of the context of these mentions?  Were they casual in nature?  Hostile?  Romanticized?  What were these utterances, and how do they illustrate the cultural attitudes of the time?  The concept seems a bit removed from what reading itself is supposed to be—the encountering of a text.  Distant reading, in this sense, seems like a brief flyover.

One of my colleagues noted that the way the programs work in this situation is very much like how Google Books works:  they find these words and send back results in the context of how they were found, so “Australia” would be returned in the context of “Australia is an island peopled entirely of criminals” as opposed to simply as “Australia” (a keyword-in-context routine of the sort sketched below).  This example is glib, I know, but the way that I read Wilkens’ article indicates that this example is ostensibly correct.  Wilkens never once mentioned text in context, only that the methodology of finding place names needed to be tweaked because several places double as proper names, and this needed to be accounted for.  The reader sees three maps showing the locations, but not what these locations mean.  I appreciated that my colleague mentioned that the results would come in context, but the published example would have benefitted from proffering this information.  Another colleague of mine noted that data mining in this manner is helpful beyond the confines of the literature profession, which, again, I am grateful to have heard, seeing as we are all lit. people, and to think beyond our boundaries, at least for me, is difficult.

However, despite these few moments of relief, something was still bothering me:  data mining can read a text, and even provide the statistician with context, but it is in NO WAY capable of providing subtext.  What is meant by an utterance or turn of phrase cannot be picked up by a machine (anyone who has heard how sarcasm doesn’t translate well in an e-mail will know that what I am saying here is true), so there is so much that data mining will miss in relation to the culture that produced these texts.
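My colleague’s Google Books point is easy to picture in code. A keyword-in-context (KWIC) routine returns each hit with a window of surrounding words rather than a bare count. This is a minimal sketch of my own, with an invented sample sentence; it is meant to illustrate the idea, not any tool Wilkens used.

```python
# A minimal keyword-in-context (KWIC) sketch: return each match with a
# window of surrounding words, the way my colleague described Google Books
# returning "Australia" in context rather than as a bare tally.
def kwic(text: str, keyword: str, window: int = 4) -> list[str]:
    words = text.split()
    hits = []
    for i, w in enumerate(words):
        if w.strip(".,;!?") == keyword:  # ignore trailing punctuation
            left = " ".join(words[max(0, i - window):i])
            right = " ".join(words[i + 1:i + 1 + window])
            hits.append(f"...{left} [{w}] {right}...")
    return hits

sample = "Some said Australia is an island peopled entirely of criminals."
for line in kwic(sample, "Australia"):
    print(line)
# ...Some said [Australia] is an island peopled...
```

Notice what the window gives you and what it doesn’t: the reader gets the sentence around the hit, but nothing in the output tells you whether the mention is casual, hostile, or romanticized. Context, yes; subtext, no.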

To punctuate my final point above, I cited James Joyce as an example.  This did not go over well.  At least not initially.  As soon as Joyce was invoked, several of my colleagues jumped up and shouted (not really):  Joyce is perfect for data mining.  His work is so dense.  We would benefit from DH in his case. 

It seemed as though He was the first author of DH data mining.  I may have blasphemed and taken the Lord’s name in vain. 

I continued: Yes, his work is dense, and much of its depth lies in the subtext of his writing, the collective, cultural meanings that are revealed only after repeated readings.  These tend to differ from person to person, and in the case of universalities, they reveal themselves at different times and to different degrees.  How could/would data mining pull back the layers of subtext when these layers are invisible to a mechanical eye?  No, data mining can tell us nothing of Joyce—or at least not all of him—and it is only through reading the text and mining it ourselves that the mysteries of Joyce can be revealed.  Data mining, in this case, would be the mountaintop-removal method, as opposed to the individual prospector.  Prospectors are in touch with the environment, and readers are very similar, knowing the terrain of the text.  This insider knowledge, as far as the literary profession is concerned, should be at the fore, not the mechanized version.

Save that for sociology.