
exploring and collecting history online — science, technology, and industry


ECHO Blogging Central

Using Social Bridging to Be "For Everyone" in a New Way

Museum 2.0 - 15 hours 17 min ago
Like a lot of organizations, my museum struggles with two conflicting goals:
  1. The museum should be for everyone in our community.
  2. It's impossible for any organization or business to do a great job being for everyone. We're more successful when we target particular communities or audiences and design experiences for them.
How do you reconcile the desire to be inclusive with the practical imperative to target? In the past, I've subscribed to the theory that an organization should target many different groups and types of people to serve a constellation of specific audiences across diverse affinities, needs, and interests. 
But ultimately, that's still targeting. It's still grouping. And while it may be effective when it comes to marketing, it's limiting if your mission is to reach and engage with a wide range of people. It can lead to parallel programming: bike night for hipsters, bee night for hippies, family night for kiddies. And rarely the twain shall meet.
At the Santa Cruz Museum of Art & History, we're approaching this challenge through a different lens: social bridging. One of our core programming goals is to build social capital by forging unexpected connections between diverse collaborators and audience members. We intentionally develop events and exhibitions that matchmake unlikely partners--opera and ukulele, Cindy Sherman and amateur photographers, welding and knitting. Our goal in doing this work is to bring people together across difference and build a more cohesive community.
We have been explicitly focusing on social bridging for more than a year now. What started as a series of experiments and happy accidents is now embedded in how we develop and evaluate projects. We've seen surprising and powerful results--visitors from different backgrounds getting to know each other, homeless people and museum volunteers working together, artists from different worlds building new collaborative projects. Visitors now spontaneously volunteer that "meeting new people" and "being part of a bigger community" are two of the things they love most about the museum experience.
This has led to a surprising outcome: we are now de-targeting many programs. This isn't just a philosophical shift--it's also being driven by visitors' behavior. "Family Art Workshops" suffer from anemic participation, whereas multi-generational festivals are overrun with families. Single-speaker lectures languish while lightning talks featuring teen photographers, PhD anthropologists, and professional dancers are packed. Programs that emphasize bringing diverse people together are more popular than those that serve intact groups. Why fight it?
And so, while we continue to acknowledge that specific communities have particular assets and needs, we spend more time thinking about how to connect them than how to serve each on its own. We're comfortable being deliberately unhip if it means that a seven-year-old, a seventeen-year-old, and a seventy-year-old all feel "at home" at the museum. This approach allows us to sidestep the question of parallel versus pipeline programming and instead create a new pipeline that is about unexpected connections and social experiences.
Focusing on social bridging also leads to tricky questions as to how we develop new programming, especially when it comes to outreach. When we offer programs at a school or neighborhood festival or community center, we do it to work with the people who live or learn there. Ironically and somewhat depressingly, our partnerships with marginalized communities often involve more segregated work because of our desire to engage in their space, on their terms. There are some groups who we work with terrifically in their own space but who we rarely engage in ours. This leads to good bonding, but very little bridging.
I don't have the answer to how we can incorporate bridging across the various ways we work with intact and blended communities. When it comes to school programs, we are now actively exploring how our approach might shift to emphasize bridging--among students in the same school, among students from different schools, among students across their school and home life. When it comes to working with intact cultural and ethnic communities, one of the resources that is helping me think through these questions is a 2004 paper by Dr. Pia Moriarty on Immigrant Participatory Arts in Silicon Valley. In the paper, Dr. Moriarty puts forward a paradigm of "bonded-bridging" to describe the way that ethnically-identified programs and organizations contribute to bridging in a majority-immigrant community. It's a thoughtful and intriguing paper, and I encourage you to read it.
I'm still chewing on the idea of "bonded-bridging" and the limitations and possibilities of a bridging strategy in a diverse community. But for now, I'm happy that we've been able to address some of our hand-wringing over targeted programs and inclusion with an approach that serves both our visitors and our core goals.
Does social bridging make sense for your institution? How do you reconcile inclusion and targeting in program design?

To MOOC or Not to MOOC? What’s In It For Me?

edwired - Tue, 05/07/2013 - 14:25

The title of this post is purely rhetorical because no one has asked me to teach a MOOC. In fact, I have not been involved with MOOCs at all, except as an observer from afar. Instead, the title is the result of my wondering why anyone would teach a course with tens of thousands of students enrolled (maybe more), whom you would never meet, and which requires an enormous amount of start-up effort (designing the course, filming the lectures, figuring out the grading algorithms, etc., etc.).

I understand why universities want to get MOOCs out there with their most prominent professors teaching them. Having a big name professor offer a MOOC brings many, many eyeballs to your campus logo (and even better to the website) and helps burnish your image in a global market for higher education. In short, MOOCs are marketing dollars well spent, even if they aren’t yet showing any sign they are good for the bottom line, given the terms that companies like Coursera are offering colleges and universities.

But why would a professor, especially a prominent (and presumably busy) professor, bother to spend all the time and effort necessary to bring a MOOC to market and then, one assumes, have some connection to its implementation? After all, designing a new course or redesigning an old one takes a lot of time in the analog world. When you consider the time required to film lectures, work with an editor to polish up that film and add in B-roll, design online assignments and assessments, and think through how students are going to progress through the various online materials, a MOOC represents a lot of time and effort.

After puzzling on this question, I can think of two answers.

The first is what we might call educational altruism. MOOCs offer faculty members a chance to make their courses available, for free, to the widest possible audience. As scholars we are supposed to be engaged in the circulation of knowledge, and being able to circulate one’s knowledge of a particular subject to 70,000 or 100,000 students, even if only a tiny fraction of them complete the course, is a potentially wonderful thing. I’m not sure that those students learn anywhere near what they would learn in a well-designed face-to-face class, given that MOOCs largely replicate the lecture/listen binary model that is so ubiquitous in large American universities. That model has been demonstrated in countless studies by cognitive scientists to yield only minimal learning gains, even when taught by famous or brilliant lecturers. But if the purpose of teaching a MOOC on one’s subject is to make one’s expertise in a given subject available, for free, to as many people as possible, that’s a laudable act. I’m not sure how much of this educational altruism there is out there, but I’m willing to admit that it might really exist.

The second reason is more mercenary and involves the sale of books and/or other collateral products. In particular, I wondered whether MOOCs offered faculty members an opportunity to make some serious money on the teaching and learning products that they have created.

To test my idea that book sales might just be part of the reason why some faculty members would teach a MOOC, I randomly selected eight courses across the disciplines and from various universities on the Coursera website. I tried to do the same thing at the Udacity site, but one cannot read the course syllabi there. What I found was that on all eight syllabi, the only readings students were expected to do were from free and open source/open access materials. However, five of the eight professors recommended or suggested as optional books that they had written, ranging in price from $8 to $110. Of the remaining three, one recommends only open source works, and the other two recommend books published by others, priced at $44 and $142.

If we assume for a minute that some fraction of the tens of thousands of students taking part in a given MOOC go ahead and purchase the “recommended” or “optional” book written by the professor teaching the course, the potential for significant earnings via book sales is very real. For the sake of argument, let’s say that I taught a MOOC that drew 50,000 students and I recommended as optional the ebook version of my new book ($19.95). And, for the sake of this same argument, let’s say that 10% of the students purchased a copy. Under the terms of my contract with the press, I would make just under $7,000 in royalties from the sale of those books. While $7,000 is not enough for the down payment on that beach house I’ve been wanting, it’s still $7,000 in additional income.
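The arithmetic behind that figure is easy to sketch. The 7% royalty rate below is my assumption, back-solved from the post's "just under $7,000"; actual contract terms vary widely.

```python
# Back-of-the-envelope version of the royalty estimate above.
# The royalty_rate is an assumption chosen to match the post's
# "just under $7,000" figure, not a known contract term.
enrolled = 50_000        # students in the hypothetical MOOC
buy_rate = 0.10          # fraction who buy the optional ebook
price = 19.95            # ebook price in dollars
royalty_rate = 0.07      # assumed author royalty on ebook sales

copies = int(enrolled * buy_rate)
royalties = copies * price * royalty_rate
print(f"{copies:,} copies -> ${royalties:,.2f} in royalties")
# 5,000 copies -> $6,982.50 in royalties
```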

Different states and different institutions have widely varying rules (and even laws) governing whether faculty members can require students to purchase a book from which the faculty member receives income. But those rules were made with the standard course for credit model in mind. MOOCs disrupt that model by not offering credit and in the cases I looked at, by having all textbooks be “recommended” or “optional.” Once MOOCs move to the credit bearing/tuition charging mode, it will be interesting to see whether there is any change in this approach. I suspect there won’t be, if only because the openness of a MOOC begins to break down once it starts to get expensive for students.

Open Thread: Your Stories of Risk and Reward

Museum 2.0 - Wed, 05/01/2013 - 17:37

What's the biggest professional risk you've taken? What happened after you took the risk? 
In three weeks, Kathleen McLean and I are co-hosting a freewheeling talk show at the American Alliance of Museums conference. The theme is "risk and reward," and we plan to explore both individual and institutional relationships to risk-taking. 
Kathy and I have each spent a lot of time advocating for experimental practice and risk-taking in museums, both as consultants and on staff. We've seen the mixed results--lots of excitement, lots of push back, some progress. For me personally, risk-taking has led to incredible professional opportunities, for which I feel lucky and grateful. I'm particularly indebted to Anna Slafer, my amazing boss at the Spy Museum in the mid-2000s. Anna would kick me under the table when I shared ideas out of turn, yet she also fiercely defended me (and our whole team) so we could do creative, risky work.
But many organizations don't have an Anna. Many people struggle with fears of punishment or marginalization for taking risks. It's hard for me to evaluate the extent to which these fears are well-founded, and whether the climate for risk is changing in the arts sector broadly. 
So I'm curious: what is your experience? Did you or your institution take a risk that got rewarded? Punished? Ignored? 
Please share your story in the comments. 

And if you're coming to Baltimore, please join us on Wednesday May 22 at 10:15 for a lively conversation informed by your stories. 

Newsletter from Medical Museion

Biomedicine on Display - Tue, 04/30/2013 - 09:02
Click here for the newsletter, in Danish and English.

4th newsletter from Medical Museion in 2013.

  • “Explore the substance and science of fat” – numbers are limited for this hands-on event.
  • “Under The Skin: Follow the construction of the new exhibition” – Follow the process.
  • “Web exhibition: behind the scenes on ‘Biohacking – Do It Yourself!’” – Explore the field of biohacking.
  • “Save the date! The Data Body On The Dissection Table” – Event on the data body on June 4th.

If you want to receive future versions, sign up for our mailing list here.

Auf Wiedersehen, Mein Freund

edwired - Mon, 04/29/2013 - 21:34

Over the weekend my friend and colleague Peter Haber passed away after an extended illness. I was fortunate enough to know Peter for only the past four years, but I benefitted greatly from his friendship, his collegiality, his ideas, and his good humor.

Like my former colleague Roy Rosenzweig, Peter was a “connector” — one of those people who brought others together for the benefit of everyone. Through Peter I have met and begun to work with a number of colleagues in Switzerland and Austria, colleagues I never would have met otherwise. More importantly, though, my understanding of digital history and digital humanities is so much richer for having read Digital Past. Geschichtswissenschaft im digitalen Zeitalter (2011). What Peter brought to the study of digital history was a scientific rigor, a style of analysis, that is so often lacking in English language scholarship on our field. If I could quibble with one thing about the edition of the book that I own, it is the photograph of Peter on the back cover. In that photo, he seems dark and mysterious. Those who knew him well know he was anything but dark or mysterious.

Perhaps the most tangible evidence of Peter the Connector is his co-authored volume (with Martin Gasteiner), Digitale Arbeitstechniken (2010). When I read these essays I came away with a much better sense of the kinds of work being done by my German-speaking colleagues in digital history — work I would likely not know if Peter and Martin had not collected it. More importantly, though, I began to think about several issues near and dear to me in new and different ways. That is what the best scholarship does for us.

But really, Peter’s greatest academic contribution, in many ways, has been Hist.net, perhaps the longest-lived digital history blog in any language. With his close friend and collaborator Jan Hodel, Peter spent more than a decade making all things digital and historical available and accessible to a wide audience. I knew of the blog before I knew Peter and Jan, and one of my happiest professional moments was the day I received an email from the two of them inviting me to speak at a conference in Basel. For my own family health reasons, I couldn’t attend that meeting and so I was very pleased (and relieved) when they kindly invited me back the following year to speak in Basel. That meeting was the starting point of our three way friendship and collaboration on Global Perspectives on Digital History, a project that kept us connected until he became too sick to continue.

One of the most enjoyable days I’ve spent in the past several years was with Peter, when he was still feeling fine, touring the Fondation Beyeler, then returning to Basel for a coffee. That is the Peter I will remember. But I will also remember the Peter who, when you said something he didn’t entirely agree with, would cock an eyebrow, pause, and then ask a probing question that politely disagreed, while trying to find a way that the two of us could agree. I will miss both of those Peters very much.

 

Itinera Nova in the World(s) of Crowdsourcing and TEI

Collaborative Manuscript Transcription - Mon, 04/29/2013 - 18:23
On April 25, 2013, I presented this talk at the International Colloquium Itinera Nova in Leuven, Belgium. It was a fantastic experience, which I plan to post (and speak) more about, but I wanted to get my slides and transcript online as soon as possible.

Abstract: Crowdsourcing for cultural heritage material has become increasingly popular over the last decade, but manuscript transcription has become the most actively studied and widely discussed crowdsourcing activity over the last four years. However, of the thirty collaborative transcription tools which have been developed since 2005, only a handful attempt to support the Text Encoding Initiative (TEI) standard first published in 1990. What accounts for the reluctance to adopt editorial best practices, and what is the way forward for crowdsourced transcription and community edition? This talk will draw on interviews with the organizers behind Transcribe Bentham, MoM-CA, the Papyrological Editor, and T-PEN as well as the speaker's own experience working with transcription projects to situate Itinera Nova within the world of crowdsourced transcription and suggest that Itinera Nova's approach to mark-up may represent a pragmatic future for public editions.
I'd like to talk about Itinera Nova within the world of crowdsourced transcription tools, which means that I need to talk a little bit about crowdsourced transcription tools themselves, and their history, and the new things that Itinera Nova brings.
Crowdsourced transcription has actually been around for a long time. Starting in the 1990s we see a number of what are called "offline" projects. This is before the term crowdsourcing was invented.
  • A Dutch initiative: Van Papier naar Digitaal which is transcribing primarily genealogy records. 
  • FreeBMD, FreeREG, and FreeCEN in the UK, transcribing church registers and census records. 
  • Demogen in Belgium -- I don't know a lot about this -- it appears to be dead right now, but if anyone can tell me more about this, I'd like to talk after this. 
  • Arkivalier Online--also transcribing census records--in Denmark.
  • And a series of projects by the Western Michigan Genealogy Society to transcribe local census records and also to create indexes of obituaries.
One thing these have in common, you'll notice, is that these are all genealogists. They are primarily interested in person names and dates. And they emerge out of an (at least) one hundred year old tradition of creating print indexes to manuscript sources which were then published. Once the web came online, the idea of publishing these on the web [instead] became obvious. But the tools that were used to create these were spreadsheets that people would use on their home computers. Then they would put CD-ROMs or floppy disks in the post and send them off to be published online.
Really the modern era of crowdsourced transcription begins about eight years ago.  There are a number of projects that begin development in 2005.  They are released (even though they've been in development for a while) starting around 2006.  FamilySearch Indexing is, again, a genealogy system primarily concerned with records of genealogical interest which are tabular.  It is put up by the Mormon Church.

Then things start to change a little bit.  In 2008, I publish FromThePage, which is not designed for genealogy records per se -- rather it's designed for 19th and 20th century diaries and letters.  (So here we have more complex textual documents.)  Also in 2008, Wikisource--which had been a development of Wikipedia to put primary sources online--starts using a transcription tool.  But initially, they're not using it for manuscripts because of policy in the English, French, and Spanish language Wikisources.  The only people using it for manuscripts are the German Wikisource community, which has always been slightly separate.  So they start transcribing free-form textual material like war journals [ed: memoirs] and letters.  But again, we have a departure from the genealogy world.

In 2009, the North American Bird Phenology Program starts transcribing bird observations.  So in the 1880s you had amateur bird-watchers who would go into the field and they would record their sightings of certain ducks, or geese, or things like that, and they would record the location and the birds they had observed.  So we have this huge database of the presences of species throughout North America that is all on index cards.  And as the climate changes and habitats change, those species are no longer there.  So scientists who want to study bird migration and climate change need access to these.  But they're hand-written on 250,000 index cards, so they need to be transformed.  So that requires transcription, also by volunteers. [ed: The correct number of cards is over 6 million, according to Jessica Zelt's "Phenology Program (BPP): Reviving a Historic Program in the Digital Era"]
2010 is the year that crowdsourced transcription really gets big.  The first big development is the Old Weather project, which comes out of the Citizen Science Alliance and the Zooniverse team that got started with GalaxyZoo.  The problem with studying climate change isn't knowing what the climate is like now.  It is very easy to point a weather satellite at the South Pacific right now.  The problem is that you can't point a weather satellite at the South Pacific in 1911.  Fortunately, in many of the world's navies, the officer of the watch would, every four hours, record the barometric pressure, the temperature, the wind speed and direction, the latitude and the longitude in the ship's log.  So all we have to do is type up every weather observation for all the navies' ships, and suddenly we know what the climate was like.  Well, they've actually succeeded at this point -- in 2012 they finished transcribing all the British Royal Navy's ships' log weather observations during World War I.  So this has been very successful -- it's a monumental effort: they have over six hundred thousand registered accounts--not all of those are active, but they have a very large number of volunteers. 
Also in 2010 in the UK, Transcribe Bentham goes live.  (We'll talk a lot more about this -- it's a very well documented project.)  This is a project to transcribe the notes and papers of the utilitarian philosopher Jeremy Bentham.  It's very interesting technically, but it was also very successful drawing attention to the world of crowdsourced transcription.
In 2011, the Center for History and New Media at George Mason University in northern Virginia publishes the Papers of the United States War Department, and builds a tool called Scripto that plugs into it.  Now this is primarily of interest to military and social historians, but again we're getting away from the world of genealogy, we're getting away from the world of individual tabular records, and we're getting into dealing with documents.
Once we get there, we have a tension.  And this is a pretty common tension.  There's an institutional tension, in that editing of documents has historically been done by professionals, and amateur editions have very bad reputations.  Well now we're asking volunteers to transcribe.  And there's a big tension between, well how do volunteers deal with this [process], do we trust volunteers?  Wouldn't it be better just to give us more money to hire more professionals?  So there's a tension there.

There's another tension that I want to get into here, since today is the technical track, and that's the difference between easy tools and powerful tools, and [the question of] making powerful tools easy to use.  This is common to all technology--not just software, and certainly not just crowdsourced transcription--but it's new because this is the first time we're asking people to do these sorts of transcription projects. 

Historically these professional [projects] have been done using mark-up to indicate deletions or abbreviations or things like that. 
So there's this fear: what happens when you take amateurs and add mark-up?

Well, what is going to happen?  Well, one solution--and it's a solution that I'm distressed to say is becoming more and more popular in the United States--is to get rid of the mark-up, and to say, well, let's just ask them to type plain text. 
There's a problem with this.  Which is that giving users power to represent what they see--to do the tasks that we're asking them to do--enables them.  Lack of power frustrates them.  And when you're asking people to transcribe documents that are even remotely complex, mark-up is power.
So I'm going to tell a little story about scrambled eggs.  These are not the scrambled eggs that I ate this morning--which were delicious by the way--but they're very similar. 
I'm going to pick on my friends at the New York Public Library, who in 2011 launched the "What's on the Menu?" project.  They have an enormous collection of menus from around the world, and they want to track the culinary history of the world as dishes originate in one spot and move to other locations, the changes in dishes--when did anchovies become popular?  Why are they no longer popular?--things like that.  So they're asking users to transcribe all of these menu items.  They developed a very elegant and simple UI.  This UI did not involve mark-up; this is plain-text.  In fact--I'm going to get over here and read this--if you look at this instruction, this is almost stripped text: "Please type the text of the indicated dish exactly as it appears.  Don't worry about accents." 
Well, this may not be a problem for Americans, but it turns out that some of their menus are in languages that contain things that American developers might consider accents.  This is a menu that was published on their site in 2011.  They sent out an appeal asking, "can anyone read Sütterlin or old German Kurrentschrift"?  I saw this and I went over to a chat channel for people who are discussing German and the German language, because I knew that there were some people familiar with German paleography there, and I wanted to try it out.
So the transcribers are going through and they're transcribing things, and they get to this entry: Rühreier.  All right, let's transcribe that without accents.  So they type in what they see.  Rühreier is scrambled eggs.  And what they type is converted to "Ruhreier", which are... eggs from the Ruhrgebiet?  I don't know?  This is not a dish.  I'm not familiar with German cuisine, but I don't think that the Ruhr valley is famous for its eggs.
And this is incredibly frustrating!  We see in the chat room logs: "Man, I can't get rid of 'Ruhreier' and this (all-capital) 'OMELETTE'!  What's going on?  Is someone adding these back?  Can you try to change "Ruhreier" to "Rühreier"?  It keeps going back!"
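For what it's worth, the mangling is easy to reproduce. The NYPL's actual pipeline isn't documented here, but a common accent-stripping approach (Unicode-decompose the string, then drop the combining marks) produces exactly this result:

```python
import unicodedata

def strip_accents(text):
    # Decompose characters (u-umlaut -> u + combining diaeresis),
    # then discard the combining marks that carried the accents.
    decomposed = unicodedata.normalize('NFKD', text)
    return ''.join(ch for ch in decomposed if not unicodedata.combining(ch))

print(strip_accents('Rühreier'))  # -> Ruhreier: scrambled eggs become "Ruhr eggs"
```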

So we have this frustration.  We have this potential to lose users when we abandon mark-up; when we don't give them the tools to do the job that we're asking them to do.
Okay.  Let's shift gears and talk about a different world.  This is the world of TEI, the Text Encoding Initiative.  It's regarded as the ultimate in mark-up -- Manfred [Thaller] mentioned it some time earlier.  It's been a standard since 1990, and it's ubiquitous in the world of scholarly editing. 

Remember, up until recently, all scholarly editing was done by professionals.  These professionals were using offline tools to edit this XML which Manfred described as a "labyrinth of angle brackets."  It was never really designed to be hand-edited, but that's what we're doing. 

And because it's ubiquitous and because it's old, there's a perception among at least some scholars, some editors, that this is just a 'boring old standard'.  I have a colleague who did a set of interviews with scholars about evaluating digital scholarship, and not all but some of the responses she got when she brought up TEI were "TEI?  Oh, that's just for data entry."
Well, not quite.  TEI has some strengths.  It is an incredibly powerful data model.  The people who are doing this--these professionals who have been working with manuscripts for decades--they've developed very sophisticated ways of modeling additions to texts, deletions to texts, personal names, foreign terms -- all sorts of ways of marking this up. 
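As a small illustration of what that data model buys you: a single TEI-style encoding of a revised line, using the standard <add> and <del> elements, can be rendered as either the original or the corrected reading. The sketch below is an illustrative toy, not a real TEI processor, and the sample fragment is invented.

```python
import xml.etree.ElementTree as ET

# A toy TEI-style fragment: the author struck out "olde" and wrote "new".
fragment = '<line>the <del>olde</del><add>new</add> law</line>'

def render(elem, keep):
    """Flatten the element to plain text, keeping only <add> or only <del>."""
    drop = {'add', 'del'} - {keep}
    parts = [elem.text or '']
    for child in elem:
        if child.tag not in drop:
            parts.append(child.text or '')
        parts.append(child.tail or '')
    return ''.join(parts)

line = ET.fromstring(fragment)
print(render(line, keep='del'))  # original reading:  "the olde law"
print(render(line, keep='add'))  # corrected reading: "the new law"
```

One encoding, multiple presentations: this is the separation of model from display that plain-text transcription gives up.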

It has great tools for presentation and analysis.  Notice I didn't say transcription.

And it has a very active community, and that community is doing some really exciting things.

I want to give just one example of something that has been developed only in the last four years.  It's a module that was created for TEI called the Genetic Edition module.  A "genetic edition" is the idea of studying a text as it changes -- studying the changes that an author has made as they cross out sections and create new sections, or over-write pieces.

So it's very sophisticated, and I want to show you the sorts of things you can do [with it] by demonstrating an example of one of these presentation tools by Elena Pierazzo and Julie Andre.  Elena's at King's College London, and they developed this last year. 
This is a draft of--I believe it's Proust's À la recherche du temps perdu--unfortunately I can't see up there.  But as you can see, this is a very complicated document.  The author has struck through sections and over-written them.  He's indicated parts moved.  He's even -- if you look over here -- he's pasted on an extra page to the bottom of this document.  So if you can transcribe this to indicate those changes, then you can visualize them.
[Demo screenshots from the Proust Prototype.] And as you slide, you see transcripts appear on the page in the order that they're created,

And in the order that they're deleted even.
There's even rotation and stuff --

It's just a brilliant visualization!

So this is the kind of thing that you can do with this powerful data model.  
But how was that encoded? How did you get there?
Well, in this case, this is an extension to that thousand-page book.  It's only about fifty pages long, printed, and it contains individual sets of guidelines.  In this case, this is how Henrik Ibsen clarified a letter.  In order to encode this, you use this rewrite tag with a cause...  And this is that forest of angle brackets; this is very hard.  And this is only one item from this document of instructions, which was small enough that I could cut it out and fit it on a slide. 

So this is incredibly complex.  So if TEI is powerful; and if, as it gets more complex, it becomes harder to hand-encode; and as we start inviting members of the public and amateurs to participate in this work, how are we going to resolve this? 
If there's a fear about combining amateurs and mark-up, what do we do when we combine amateurs with TEI?  This is panic! 

And it is very rarely attempted.  I maintain a directory of crowdsourced transcription tools, with multiple projects per tool.  And of the 29 tools in this directory, only 7 claim to support TEI. 

One of them is Itinera Nova.  I found out about this when I was preparing a presentation for the TEI conference last year: I interviewed people running these crowdsourcing projects about their experience with users trying to encode in TEI, and I asked each of them, "Do you know anyone else?"

And that's how I found out about Itinera Nova, which is unfortunately not very well known outside of Belgium.  This is something that I hope to be part of correcting, because you have a hidden gem here -- you really do.  It is amazing.
So how do you support TEI?  Well, one approach--the most common approach--is to say we'll have our users enter TEI, but we'll give them help.  We'll create buttons that add tags, or menus that add tags.  This has been the approach taken by T-PEN (created by the Center for Digital Theology at Saint Louis University), and a project associated with them, the Carolingian Canon Law Project.  It's also the approach taken by Transcribe Bentham with their TEI toolbar.  Menus are an alternative, but essentially they do the same thing -- they're a way of keeping users from typing angle brackets.  So the Virtuelles deutsches Urkundennetzwerk is one of those, as well as the Papyrological Editor which is used by scholars studying Greek papyri.
So how well does that work?  You provide users with buttons that add tags to their text.  Here's an example from Transcribe Bentham. 
Here's an example from Monasterium.  And the results are still very complicated.  The presentation here is hard.  It's hard to read; it's hard to work with.

That does not mean that amateurs cannot do it at all!  Certainly the experience of Transcribe Bentham proves that amateurs [can encode] to the same level as any professional transcriber, using these tools and coding these manuscripts, even without the background.
But there are limitations.  One limitation is that users outgrow buttons.  In Transcribe Bentham, [the most active] users eventually just started typing the angle brackets themselves -- they returned to that labyrinth of angle brackets of TEI tags. 

Another problem is more interesting to me, which is when users ignore buttons.  Here we have one editor who's dealing with German charters, who uses these double-pipes instead of the line break tag, because this is what he was used to from print.  This speaks to something very interesting, which is that we have users who are used to their own formats, they're used to their own languages for mark-up, they're used to their own notations from print editions that they have either read or created themselves.  And by asking them to switch over to this style of tagging, we're asking them not just to learn something new, but also to abandon what they may already know.
And, frankly, it's really hard to figure out which buttons [to support].  Abigail Firey of the Carolingian Canon Law Project talks about how when they were designing their interface, they had 67 buttons.  This is very hard to navigate, and the users would just give up and start typing angle brackets instead, because buttons aren't a magic solution.
This is where Itinera Nova comes in.  The "intermediate notation" that Professor Thaller was talking about is quite clear-cut, and it maps well to the print notations that volunteers are already used to. 
And what's interesting--what many people may not realize--is that Itinera Nova, despite having a very clear, non-TEI interface, has full TEI under the hood.
Everything is persisted in this TEI database, so the kinds of complex analysis that we talked about earlier--not necessarily the Proust genetic editions, but this kind of thing--is possible with the data that's being created.  It's not idiosyncratic.
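The trick can be sketched in a few lines.  This is a toy illustration only: the notation rules below are invented for the example, not Itinera Nova's actual conventions, and real conversion to TEI is far more involved.

```python
# Hypothetical sketch: rewriting a print-style "intermediate notation"
# into TEI-style markup behind the scenes. The notation rules here
# (|| for line breaks, [..] for expanded abbreviations) are invented
# for illustration.
import re

def notation_to_tei(text: str) -> str:
    """Convert a simple transcriber notation to TEI-flavoured XML."""
    # "||" marks a line break, a convention familiar from print editions
    text = text.replace("||", "<lb/>")
    # "[word]" marks an editorial expansion of an abbreviation
    text = re.sub(r"\[([^\]]+)\]", r"<ex>\1</ex>", text)
    return f"<p>{text}</p>"

print(notation_to_tei("Item anno d[omi]ni || millesimo"))
# prints: <p>Item anno d<ex>omi</ex>ni <lb/> millesimo</p>
```

The point is that the transcriber only ever sees the print-style notation, while what gets stored is regular TEI that standard tools can process.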
So as a result, I really think that in this, Itinera Nova points the way to the future.  Which is to abandon the idea that TEI is just for data entry, or that amateurs cannot do mark-up.  Both of those ideas are bogus!  Instead, let's say: use TEI for the data model and for the presentation, so we have these beautiful sliders and whatever else gets created out of the annotation tool, out of the transcription tool.  But let's consider hooking up these--I don't want to say "easier"--but these more straightforward, these more traditional user interfaces [for transcription].

This is something that I think is really the way forward for crowdsourced transcription.  It is being done right now by the Papyrological Editor, and it has been done by Itinera Nova for a long time.  And there are now some incipient projects to move forward with this.  One of these is a new project at the Maryland Institute for Technology in the Humanities at the University of Maryland, the Skylark project, in which they are taking the same transcription tools that were used for Old Weather and allowing people to mark up and transcribe portions of an image of a heavily annotated literary text--like that Proust manuscript--to create data using a data model that can be viewed with tools like the Proust viewer.

So this is, I think, the technical contribution that Itinera Nova is making.  Obviously there are a lot more contributions--I mean I'm absolutely stunned by the interaction with the volunteer community that's happening here--but I'm staying on the technical track, so I'm not going to get into that. 


Are there any questions?  No?  Keep up the great work -- you folks are amazing.

Taking down exhibitions can bring us closer to the objects than building new ones (and create more fun)

Biomedicine on Display - Mon, 04/29/2013 - 09:00

I wrote the other day that taking down museum exhibitions could be as much fun as building new ones.

That was a pretty spontaneous tongue-in-cheek comment triggered by our conservator Nanna Gerdes’ enthusiastic twitter series of images (see @NaGerdes and storified here, here and here) from the process of taking down three old exhibition rooms in our museum’s Tietkens Gaard building.

But the more I think about it, I feel this spontaneous remark has some deeper truth to it. Here’s the way I reason about it:

Most curators will probably think the design and building of an exhibition is more fun than taking it down afterward. Especially if you are interested in ideas and concepts, and in constructing new unseen worlds.

Sure, it can be forbiddingly exhausting to design and build: conceptualising and physically constructing a new exhibition in the interfaces between history and the present, between images and material artefacts, immaterial ideas and three-dimensional physical spaces can at times be frustrating and anxiety-provoking.

But all in all it’s a pretty satisfying creative process. And I think it is this combination of hard work and immersion in creative processes that makes us think of exhibition making as being ‘fun’.

And in contrast, taking down an exhibition after closing day sounds, from an exhibition curator’s point of view, like a pretty dull and boring activity. The opposite of having fun. Like cleaning up after the party rather than planning and taking part in it.

However, I think there is another and more fun side to taking down than the immediate connotations of boredom, deconstruction and cleaning up.

Whereas the building and construction process has certain similarities with being on speed (especially in the last couple of weeks and days before the opening), the post-closing process is much more relaxed. If building up is associated with fervour, even hysteria, taking down is more characterised by tranquility, even melancholia.

Now, paradoxically, the creative and conceptual focus in the building phase draws the curator’s attention away from the artefacts themselves. When you build an exhibition you are 110% focused on how to find the right objects and images, and how to make them fit into the overarching theme of the show. You concentrate on the meaning of the artefacts — their history, their social context, their cultural significance, how they play together with other artefacts into a meaningful whole. The concept and the idea are sovereign, the artefacts its subjects.

After closing down, however, the conceptual frame is dead. The curator’s ordering mind has long since moved on to other storage room hunting grounds. Now the remaining artefacts are no longer subjected to the powerful mind of the inquisitive and sovereign curator, they are no longer props in the curator’s script. And suddenly we can see them for what they are, as artefacts pure and simple.

So if you really want to see, smell, touch and contemplate artefacts, you’d better not get too involved in the constructive building up of a new exhibition, but rather wait until the last visitor has left the rooms and the catalogue has been removed from the shelves of the museum shop. When the show is over, the curator in the original sense of the word (the one who cares about artefacts) enters the scene and takes a renewed and more intense look at the artefacts.

That intense dealing with the artefacts can be pretty ‘fun’ too. My online dictionary defines ‘fun’ as “a source of enjoyment, amusement, or pleasure”, and that’s what a less hectic and conceptual dealing with artefacts can be: enjoyable, amusing, pleasurable, playful.

Actually, even if we talk about exhibition making as ‘fun’, there isn’t really much time for pleasure and play in the process. Deadlines must be met, budgets kept, many different wills must be negotiated, and conflicts avoided. That’s hectic fun. But packing the whole thing down afterwards gives us a chance to engage with the things in a more free and relaxed way: that’s playful fun.

And after all, that’s what fun is about, isn’t it?

The colour historians were here

Biomedicine on Display - Sun, 04/28/2013 - 08:39

We’ve had two specialists in colour history visiting from the National Museum of Denmark.

They have worked hard grinding down selected areas of the walls and doors in the museum’s Tietkens Gaard building to find out what colours the new exhibition rooms have had since the mid-18th century.

See also Nanna’s tweets here.

For larger images, click the photos below:

newsletter: month fifteen

Word's End: searching for the ineffable - Sun, 04/28/2013 - 01:25

Dear Nico,

Yesterday you turned fifteen months old. As unhappy as this last month has been for your city, and for so many other places, you remain cheerful and loving.

You speak in sentences. We woke up one slow Sunday morning, and you said, “dog say woof.” I said, “oh yeah? What do cats say?” You didn’t answer, but when I asked you where our cats were, you said, “I don’t know!” And fair enough: they aren’t allowed in the bedroom at night, so who knows where they go when they’re behind the closed door! Having said these things, you proceeded to get off the bed safely, butt first.

Physics is a lot better now. Bath time is more awesome for your ability to squeeze the squirt toys. You hold your own bottle, which took an inexplicably long time. You can stand from sitting, slide down a slide, and oh, walk without holding on. No big deal, just walking. This is me not freaking out.

Cognitively? Huge leaps. Earlier this month you got stuck under chairs, which was hilarious; no more of that. You’re way more into stuffed toys. You’ve figured out that feeding yourself may be messier, but is infinitely better. You’re starting to get the concept of “gentle” with cats, babies, and most of the time even my face. You’ve figured out that calling me when you wake from a nap, instead of bursting into tears, totally works to bring me to you.

No more falling asleep at the boob: you’ve started to ask to be put to bed. Settling down after that might be tricky, but is a necessary life skill, so we’re both giving you space to learn this. Plus, falling asleep without being held means you can put yourself back to sleep when you wake, sometimes.

Sometimes. Everything is variable. The variability, and the fact that you’re talking up a storm and I understand about 10% of it, means tricky times at the Launch Pad.

Latest food exploits: you’re very thoughtful about coconut curry. Canned sardines are awesome. Blueberries are the best, except muscat grapes are even better. Cheese makes you tremble with excitement (then you make faces while eating it). Today’s toast with tapenade, tomatoes and feta was a smashing success. Cupcakes and ice cream and cat food… oh, my.

You give slobbery kisses and enjoy a little post-nap back rub. You’re getting more clingy cuddly. Spring is finally here, and you’re loving it almost as much as you love dogs.

Don’t believe anyone who tells you those big feelings you’re having will go away. They never do. But you’ll get much better at handling them! You’ll have no choice: eventually it’ll be either that, or I sell you to the next traveling space circus that comes through town.

Love,
-Mama

ps Pix!

Taking down exhibitions is almost as fun as building them up

Biomedicine on Display - Sat, 04/27/2013 - 11:03

As I wrote in an earlier post, we are now on track with building up the new semi-permanent exhibition ‘Under the Skin’ in the museum’s Tietkens Gaard building.

In the last couple of months, our conservator Nanna Gerdes has worked hard taking down the three former exhibition rooms and packing the artefacts for remote storage.

Judging by Nanna’s enthusiastic photographing activities, taking down the old exhibitions for storage seems to be almost as fun as building up new ones.

See Nanna’s storified twitter posts of the X-ray study collection with images here; ditto from the Finsen exhibition here, and ditto from the exhibition of anatomical models here.


(And here’s the ground plan of the exhibition rooms, ca. 25 x 12 meters in all:)


Human remains — constructing the ‘Under the Skin’-exhibition

Biomedicine on Display - Fri, 04/26/2013 - 10:34

We are now in the first phase of the construction of our new 3500 square feet semi-permanent exhibition here at Medical Museion — provisionally titled ‘Under the Skin’ — to be opened in the late autumn of 2014.

The exhibition will show some of the best specimens from our big collection of normal and pathological anatomical specimens and other human remains, together with a number of new acquisitions of contemporary human remains, such as samples from bio- and tissue banks.

Already last year we secured the basic funding for the new exhibition from the Arbejdsmarkedets Feriefond (AFF), but until recently we’ve been waiting for the University of Copenhagen’s decision to redecorate the beautiful exhibition rooms in the mid-18th century Tietkens Gaard building.

Now the University has decided to start the redecoration, and therefore we are launching an exhibition site where we describe the successive phases of the construction process:

1) taking down the former study collections in the spring of 2013

2) clearing the rooms

3) the rebuilding and redecoration of the rooms in the next 6 months

4) the continuous development of the concept for the new exhibition and the preliminary design ideas

5) choosing and curating objects and images in the next 12 months; and finally

6) the mounting and installation in the early fall of 2014.

Our conservator Nanna Gerdes has tweeted her daily work taking down, packing and conserving the objects from the former study collections in the room (follow her here: @NaGerdes).

New thoughts and ideas for the exhibition project will be available through our blog and via Facebook.

Read more here: http://www.museion.ku.dk/under-huden-under-konstruktion/

The substance of fat – a multisensory event about fat

Biomedicine on Display - Thu, 04/25/2013 - 20:28

Want to explore fat with pencil and pastry fork?

We seem to live in a world obsessed with fat. Obesity is described as a worldwide health threat, and we are bombarded by diet advice. But fat itself is a mystery. While we know that “full fat” foods can be bad for us, we also know that the body needs fat (and of course, greasy food can be the most delicious). We often find fatty substances disgusting, but moisturize our skin with lotions based on lard and oil. And the kinds of bodies seen as beautiful oscillate wildly over time and media. It’s a love-hate relationship.

Last year we opened the exhibition “Obesity – what’s the problem?” here at Medical Museion. The exhibition takes a close look at the gastric bypass operation used to treat morbid obesity, and some intriguing recent research in metabolism. It’s all very scientific and clinical. But what about fat as a substance? How do we feel about it?

On Sunday 5 May we organise an afternoon event full of sensuous exploration of our love/hate relationship with fat. With London-based fine artist Lucy Lyons as our guide, we will feel, draw and eat our way through a world of fat. Also participating will be senior curator Bente Vinge Pedersen, Medical Museion, who is responsible for the exhibition ”Obesity – What’s the problem?”. Associate Professor Romain Barres, a specialist in human fat tissue and metabolism at the Novo Nordisk Foundation Center for Basic Metabolic Research (CBMR), University of Copenhagen, will help us explore what scientists know about the way that fat cells work.

The event takes place at Medical Museion, Bredgade 62, 1260 Copenhagen K on Sunday 5 May, 1-5 pm.

Tickets including entrance to the museum, coffee/tea and cake are on sale at Billetto, 75 DKK.

More info here: http://www.museion.ku.dk/the-substance-of-fat-a-multisensory-event.

Museums, Divided Attention, and Really Bad Commercials

Museum 2.0 - Wed, 04/24/2013 - 17:47


Ready for something ridiculous? Check out this inane AT&T commercial about a woman whose absorption in her smartphone is so great that Facebook updates materialize as pieces of art in the museum through which she strolls. It's like a bad public service announcement about the relationship between ADD, self-absorption, and psychosis.

It also suggests that for young people, masterpieces in museums are not nearly as interesting as a good friend's new haircut. And while I'm heartened by the fact that YouTube commenters were offended and dismayed by the commercial, I do think this commercial reflects common fears that museum-lovers have about younger generations and museums.

There are two fears at work here:

  1. People are so distracted by technology that they can't disconnect to pay attention to what's really important. 
  2. People are more interested in their own social lives and whatever is happening right now than in the big ideas, stories, and themes that have traditionally defined us as humans and communities. 


Both of these fears have some truth to them. People (of all ages) are making bad decisions because of technology rapture--whether that be texting while driving or spending more time with screens than with family members. And social media can promote a kind of narcissism in which each of us lives in a tiny bubble of friends' rants and raves.

These issues are important. But I feel that they are societal issues, not issues specific to museums or art institutions. I think this commercial could have just as easily been framed in another context that affords focus--work, a dinner party, playing sports. This kind of behavior is a violation of attention no matter where it happens. You could even argue that the commercial inartfully points to the ways that people map their own imagination onto museum artifacts. That it suggests that museums are sufficiently populist that people feel they don't have to check their interests and comfortable behaviors at the door. In some ways, this behavior is no more objectionable than people walking through a museum chatting about their personal lives and occasionally turning to engage with the art. It's just more visible, and offensive, because of the device-mediation.

Many people feel that museums are sacred spaces for a particular kind of attentive experience, and that it would be better if people understood and valued the specialness of that experience. I agree. But I think we have to earn it. We have to help people make connections to the power of artistic mastery, scientific discovery, and historical leadership in ways that push people out of the everyday. We have to provide the interpretation, the linkages, and the sparks that bring people into meaningful engagement with our artifacts and stories.

Visitors don't want to see their own lives on the wall. But they DO want to see reflections, expansions, and distortions of their experiences in ways that allow them to form new connections. That's what compelling relevance is about. It's not pandering. It's bridging.

Metric driven Agile for Big Data

Data Mining - Sat, 04/20/2013 - 21:43

Working in Bing Local Search brings together a number of interesting challenges.

Firstly, we are in a moderately sized organization, which means that our org chart has some rough similarities to our high level system architecture. This means that we have back-end teams who worry mostly about data - getting it, improving it and shipping it. These teams are not sitting in the end-users' laps and our customers, to some extent, are internal.

Secondly, we are dealing with 'big data'. I don't consider local as it is traditionally implemented to be a big data problem per se, however when one starts to consider processing user behaviour and web scale data to improve the product it does turn into a big data problem.

Agile (or eXtreme programming) brings certain key concepts. These include a limited time horizon for planning (allowing issues to be addressed in a short time frame and limiting the impact of taking a wrong turn) and the 'on-site customer'.

The product of a data team in the context of a product like local search is somewhat specialized within the broader scope of 'big data'. Data is our product (we create a model of a specific part of the real world - those places where you can perform on-site transactions), and we leverage large scale data assets to make that data product better.

The agile framework uses the limited time horizon (the 'sprint' or 'iteration') to ensure that unknowns are reduced appropriately and that real work is done in a manner aligned with what the customer wants. The unknowns are often related to the customer (who generally doesn't really know what they want), the technologies (candidate solutions need to be tested for feasibility), and the team (how much work can they actually get done in a sprint). Having attended a variety of scrum / agile / eXtreme training events, I am now of the opinion that the key unknown of big data - the unknowns in the data itself - is generally not considered in the framework (quite possibly because this approach to engineering took off long before large scale data was a thing).

In a number of projects where we are being agile, we have modified the framework with a couple of new elements.

Metrics not Customers: we develop key metrics that guide our decision making process, rather than rely on a customer. Developing metrics is actually challenging. Firstly, they need to be a proxy for some customer. As our downstream customers are also challenged by the big data fog (they aren't quite sure what they will find in the data they want us to produce for them), we have to work with them to come up with proxy metrics which will guide our work without incurring the cost of doing end to end experimentation at every step. In addition, metrics are expensive - rigorously executing and delivering measurements is a skill required of second generation big data scientists.
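To make the "proxy metric" idea concrete, here is a hypothetical sketch. The listing IDs, the judged sample, and the choice of precision and coverage as the proxy are all invented for illustration; they are not Bing Local's actual metrics.

```python
# Hypothetical proxy metric for a data team: score produced listings
# against a small human-judged sample instead of running end-to-end
# experiments at every step. All names and numbers are invented.

def proxy_metrics(produced, judged_correct, judged_all):
    """Return (precision, coverage) against a judged sample.

    precision: of the produced listings that were judged, how many are correct?
    coverage:  of the listings judged correct, how many did we produce?
    """
    in_sample = produced & judged_all
    hits = in_sample & judged_correct
    precision = len(hits) / len(in_sample) if in_sample else 0.0
    coverage = len(hits) / len(judged_correct) if judged_correct else 0.0
    return precision, coverage

# The pipeline produced four listings; four listings were human-judged.
produced = {"cafe_1", "cafe_2", "cafe_3", "closed_4"}
judged_all = {"cafe_1", "cafe_2", "closed_4", "cafe_9"}
judged_correct = {"cafe_1", "cafe_2", "cafe_9"}

precision, coverage = proxy_metrics(produced, judged_correct, judged_all)
# precision = 2/3: of the judged listings we produced, two are correct
# coverage  = 2/3: of the three correct listings, we produced two
```

The expensive part, as the post notes, is not computing numbers like these but getting a judged sample that is a faithful proxy for what the downstream customer actually needs.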

The Data Wallow: While I'm not yet happy with this name, the basic concept is that in addition to the standard meetings and behaviours of agile engineering, we have the teams spend scheduled time together wallowing in the data. The purpose of this is twofold: firstly, it is vital that a data team be intimate with the data they are working with and the data products they are producing - the wallow provides shared data accountability. Secondly, you simply don't know what you will find in the data and how it will impact your design and planning decisions. The wallow provides a team experience which will directly impact sprint / iteration planning.


Seeking Clarity about the Complementary Nature of Social Work and the Arts

Museum 2.0 - Wed, 04/17/2013 - 18:56
When we talk about museums or cultural institutions as vehicles of social and civic change, what does that really mean? Last week I had a conversation that changed my perspective on this question.

I was with two close friends who work in social service organizations focusing on homelessness and criminal justice respectively. We all work for nonprofits. We all care about making a difference in our community. And we each have specific interests in increasing access, connection, and empowerment of marginalized people.

But when you switch from the "why" to the "what" of our work, the similarities end. Here are some of the big differences we noticed:
  • Their work involves life-or-death situations. Museum work is mostly non-contact. The consequences of risk-taking and experimentation are incredibly different.
  • There is infinite demand for their services, whereas we struggle to generate demand for ours. There will never be enough meals for hungry people or mental health facilities for those who need them. Meanwhile, arts industry leaders worry about "oversupply" of organizations in the face of dwindling demand. 
  • Social service providers often find themselves working in a reactive stance to unexpected incidents. Arts organizations can operate on their own timelines and internal values. Those that want to be more relevant often have to push themselves to work responsively to events outside their domain.
These differences made me realize that even as we talk about arts organizations as vehicles for civic engagement or social change, we have the opportunity (and the necessity) to think of our work in a distinct way. This may sound obvious, but the rhetoric about cultural organizations working in the social sphere often ignores our inherent differences. We champion a historic house museum for hosting a soup kitchen, a children's museum for tackling family wellness in low-income housing, or an arts organization for writing poems with convicts. We talk about these projects as if they were analogous to the work being done by a social service agency, and we wonder where the line between cultural and social work blurs.

This is the wrong analogy and the wrong question. Instead of asking whether we are focusing too little or too much of our attention on social work, we should be asking HOW we can approach the work of community development in a distinctive way.

Looking back at the bulleted list above, every one of the differences between arts organizations and social service organizations presents an opportunity for us to do really interesting, specific work. We CAN take risks with more flexibility than social service agencies. We CAN devote some of our resources to reaching communities with incredible demands. We CAN develop programs that are visionary and unusual because we are not wading in crises to which we must respond.

When the Jane Addams Hull-House Museum hosts a monthly soup kitchen, they are doing it to open up conversations about social justice around food. When the Boston Children's Museum initiated the GoKids wellness program, they did it to empower families to co-create meaningful shared experiences that emphasize health. When my museum brings together homeless and non-homeless volunteers to restore a historic cemetery, we do it to encourage people in our community to look at history and each other with respect. I admire all of these projects, and I also acknowledge that they achieve different goals by different means than social service agencies do.

Cultural organizations have the luxury to do work that supports community development in ways that are more creative, experimental, and yes--supplemental--than social service organizations. The very fact that the work we do is "extra" shouldn't be a downside. We're doing it because we have the unique capacity to do so. We're doing it because we care. We're doing it because that's what "adding value" means.

A Looming Disaster for History (II)

edwired - Tue, 04/16/2013 - 15:30

As a follow up to my previous post about history’s gender problem, I now want to offer some possible solutions for our discipline. Before I do, however, a bit more context on the gender problem History has here at George Mason seems warranted. Of the undergraduate programs in our college with more than 100 declared majors, only three have enrollments where fewer than two-thirds of those declared majors are female — History (40%), Government (41%), and Economics (34%). Every other substantially enrolled major in our college is more female than the university average of 62%.

Further, our MA enrollments are similarly skewed. Overall MA enrollments in the College of Humanities and Social Sciences are 60% female, but in History, MA enrollments are only 42% female. Thus, the problem I identified in my previous post extends beyond the undergraduate years into the MA. Given what Rob Townsend has written for the American Historical Association, I suspect we are very typical of history departments nationwide.

What then can be done to deal with history’s gender problem (and not just at George Mason)?

Too often, the standard answers to this sort of gender problem in an academic discipline are to increase the number of female faculty and/or to teach more courses that will appeal to female students. To my mind, the first of these is pretty obvious and needs constant attention. Even in a department that is changing rapidly, only 40% of the tenure track faculty in History here at Mason are female, so further attention to finding a full gender balance is something we’ll need to continue to work on. But it’s the second of those proposed solutions that I think is off the mark.

First of all, such phrasing assumes that male and female students can’t or won’t be interested in the same things about history, and second, it tends to turn on simplistic notions about preferences, such as male students want military history (and women don’t) and/or female students want women’s history (and men don’t). While I think information about student preferences for course content is important, the problem is more complex than simply offering a few more of this or a few less of that type of course.

Instead, I think the problem seems to lie in the way history is taught and in the ways we conceive of and describe to students what they might do with their degrees in history. One of the most important reasons I say “seems to” here is that there is very little in the way of solid data on the role that gender plays in the choice of major in college, and what little data exist tend to be focused on the much greater gender gap in the STEM fields.

Nevertheless, it is possible to glean some useful information from some of the STEM-focused studies. For instance, a 2009 report by Basit Zafar, an economist at the Federal Reserve Bank of New York (“College Major Choice and the Gender Gap”), offers some very interesting data on the role gender plays in the choice of major. Zafar’s study was limited to students at Northwestern University and so does not pretend to be broadly predictive. However, it does offer a very rigorous analysis of data. Zafar concludes that differences in major choice between men and women are not based on expectations of future income, nor are they explained by differential levels of confidence in one’s academic abilities, nor (for those with US born parents) do beliefs about the status of a future job resulting from a major play an important role in the choice of major.

Instead, Zafar concludes that for those with US born parents the most important factor in the choice of major is the degree to which one expects to enjoy the coursework and the degree to which one expects to enjoy a future career tied to that major, with female students having a much greater concern for these two factors than male students (pages 25-28). For those with foreign born parents, whether male or female, perceptions of the status of the major and the status of jobs that might result from that major play a more important role for both male and female students, but especially for male students (20).

Assuming for a minute that Zafar’s data could be replicated across a much broader sample of students, then we need to think very carefully about the ways we teach about the past. Ask a group of graduating history majors how much diversity there was in the teaching methodologies they experienced in their history courses and I think it’s a safe bet that they will say, “not much.” The vast majority of history classes follow a general lecture-plus model in which professors mostly lecture with some discussion time thrown in daily or weekly. At some point this style of teaching has to become boring, no matter how good the professor is at delivering it.

We also need to think very carefully about the ways we talk about careers our students might pursue after graduation. As the digital economy rolls over us, the work our students will be doing after graduation is increasingly very different from the work they might have done five or ten years ago, but by and large our descriptions of that work remain the same, rooted in a series of generalized notions about what one might do with a liberal arts degree. It’s time for us to get much more specific about the jobs our students are getting, and will get, in the new economic reality they’ll be living in.

Which brings me to my final point: these two considerations do not exist in isolation from one another. Instead, they are inextricably linked. One way to increase the levels of enjoyment our students experience (or expect to experience) is to begin creating courses that break the lecture-plus model and incorporate project work, service learning, and other forms of “doing history.” Rather than continuing to talk to or with them about the past, it’s time to develop courses that get them into the field, into the archives, and into employment sites such as museums and historic sites; in short, courses that give them a chance to exercise their creative energies. One more great lecture or one more well-thought-out five-page essay assignment just isn’t going to do that.

Examples of what I’m talking about exist all over the country, but they are the exceptional courses in history curricula. If we are going to take seriously the notion that our gender problem — which is very real — needs to be addressed, then it’s time for a national conversation about how changing our curriculum is the way to address that problem.

numerals, and language with an infinite lexicon

Obscure and Confused Ideas - Tue, 04/16/2013 - 12:31
Here is an idea that I've seen in several places: "it is plain that if a language is to be learnable, the number of basic significant elements (words) has to be finite" (Tim Crane, The Mechanical Mind, 2nd ed. p.140).  (Is the locus classicus for this Davidson's "Theories of Meaning and Learnable Languages?")

But how does this square with numerals (words for numbers)?  There are an infinite number of numerals.  And arithmetical language is learnable. 

So how could/should Crane, Davidson, et al. deal with this apparent counterexample?  I'm not sure... perhaps (despite appearances) numerals are not themselves genuine words; rather, only the digits are genuine words, and the higher numerals are complex supra-word symbols.
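The digits-as-words suggestion can be made concrete. Here is a minimal sketch (my own illustration, not anyone's published formal semantics) showing how a finite lexicon of ten digit words plus a single recursive composition rule assigns denotations to infinitely many numerals, so the language remains learnable from finite resources:

```python
# Finite lexicon: ten basic significant elements (the digit words).
DIGITS = {"0": 0, "1": 1, "2": 2, "3": 3, "4": 4,
          "5": 5, "6": 6, "7": 7, "8": 8, "9": 9}

def denote(numeral: str) -> int:
    """Compositionally interpret a decimal numeral string."""
    if len(numeral) == 1:
        return DIGITS[numeral]  # base case: a lone digit word
    # Recursive rule: a numeral "Nd" denotes 10 * [[N]] + [[d]],
    # where N is the numeral minus its last digit d.
    return 10 * denote(numeral[:-1]) + DIGITS[numeral[-1]]

print(denote("7"))     # 7
print(denote("2013"))  # 2013
```

A learner who acquires the ten digits and the one rule can interpret every numeral, which is why an infinite stock of numerals does not by itself refute the finiteness requirement.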

Changes at UCLDH

Melissa Terras' Blog - Mon, 04/15/2013 - 13:52
We’re going into our fourth year at the UCL Centre for Digital Humanities, and there have been quite a few changes along the way. Since founding the centre as its director, Professor Claire Warwick has also taken on the roles of Head of the UCL Department of Information Studies and Vice Dean of Research for the Arts and Humanities faculty. Over the past year, Claire and I have been co-directing the centre. I’m pleased, proud, and a little bit nervous to say that from now on I’ll be taking on full operational duties as Director of UCLDH, still working closely with Claire, who remains committed to Digital Humanities as a subject and to UCLDH in particular. I’d like to take this opportunity to thank Claire for her continued input into UCLDH. I look forward to working with her in this slightly different capacity over the next few years, along with the rest of the UCLDH team, and to putting my efforts into building up UCLDH even further after its great start.

Onwards! 

A Looming Disaster for History

edwired - Fri, 04/12/2013 - 14:32

In the April issue of Perspectives, Rob Townsend offers what is perhaps his last analytical article for the American Historical Association’s monthly newsletter (Rob has moved on from the AHA to a new job): “Data Show a Decline in History Majors.”

From the title of this post, you might be inclined to think that I’m worried that a decline in history majors is the looming disaster for history departments around the country. If only it were that simple. You see, undergraduate history programs don’t have an enrollment problem. We have a gender problem.

According to the National Center for Education Statistics, in 2010 just under 57% of all undergraduate students at 4-year non-profit institutions of higher education were female, and the data for degrees conferred are similar. According to Rob’s article, fewer than 41% of the BA degree recipients in history departments were female in 2011. Our data here at George Mason are even worse: female history majors represent only 40% of our total at an institution where 62% of our undergraduate students are female.

That yawning gap between overall undergraduate enrollments and history enrollments is the size of our gender problem.

The problem is bad enough on its own to require us to take action as a profession. In addition to the obvious need to do something about the relatively low popularity of history as a discipline among undergraduate women, we also need to fix this problem for pragmatic reasons. As has been reported widely over the past several years, institutions of higher education are increasingly enrollment driven. This isn’t news to private institutions, which have been living and dying by their enrollment numbers for years. But it is a new experience for many public institutions, which only in the past decade or so have been learning what it’s like to live or die by the same data. In this fiscal environment, if we don’t fix our gender problem soon, history departments all across the country should expect to see tenure lines and other important resources shifting to departments with more robust enrollments — enrollments that will only be robust with large numbers of female students.

What is to be done? None of the answers are simple or obvious and there is certainly no silver bullet that could solve our gender problem in undergraduate history education. Instead, I think it is high time we embark on a sustained conversation about change in undergraduate history education — including changes that will make our discipline just as appealing as other majors are to the largest segment of the undergraduate enrollment on our campuses.

The alternative is to decide that history is doomed to be an ever smaller part of the undergraduate enterprise. I believe that if we really commit ourselves to doing something about our gender problem, we can and will find ways to change for the better. But we need to commit. And soon.

Is Digitizing Historical Texts a Bad Idea (II)?

edwired - Thu, 04/11/2013 - 13:26

My previous post about digital historical text generated some very interesting comments, both here and on Twitter. I met with my students again last night and we had an extended discussion about those discussions, so thanks to everyone who chimed in. What follows is a summary, more or less, of our conversation last night.

We were particularly taken by Steve Ramsay’s critique of my post, especially the following paragraph:

If so, your problem clearly necessitates access to the original work. But if you are concerned merely to read it, it seems to me very hard to argue against a digital copy. And the truth is that even digital copies can rival the originals for problems that apparently involve the “thingness” of the thing. Scans of the Beowulf manuscript — which no responsible scholar should ever touch — are of such density that one can see the hills and valleys of the vellum. I’m unable to imagine what it is about scans of the War Papers that make the original “disappear from view” or resistant to prioritization as historical sources. Are you prepared to argue that Spencerian handwriting moves documents up and down the hierarchy of importance?

None of us was arguing that digitizing texts was, in and of itself, bad. We all agreed that access to the content of those texts was an unqualified good. And I’ve gone back into the original post and clarified my language about the War Department project, because the way I wrote one sentence made it sound as though I was unhappy with the scans of the documents (which are copies of the originals due to a fire that destroyed the originals — see the project page for more on this issue).

Nevertheless, we all agreed that as historians, we care about the “thingness” of the source, and we care a lot about it. Not because of some “thinly veiled nostalgia” for the thing itself, but because texts are both texts and historical artifacts, and so students of the past need access to that thingness if they are to understand both aspects of the source: its content and its materiality.

The importance of the text itself is pretty obvious and so doesn’t need clarification. But the materiality does. We discussed, for instance, the problems posed in teaching with historical newspapers via a database like ProQuest Historical Newspapers. The ProQuest search delivers the story requested abstracted from the page it appeared on. The full page is available as well, but unless students are taught what a newspaper is — how the arrangement of content on the page and its placement in a section is the result of a dynamic process involving editors, writers, and layout staff — they will have no sense of why the placement of a story sometimes matters as much as its content. “Above the fold” and “below the fold” become meaningless when a database serves up only the story.

ProQuest at least returns a PDF of the original story, so students can see the typeface and (often but not always) the images that went along with the story. And they can examine the headline and consider why a headline might be more sensational than the content of the story warrants — again, as a result of that dynamic process involving several actors I just described.

As for the hierarchy we assign to sources, we also agreed that sometimes we might just assign a different importance to a source based on things other than the words in the text — that all sorts of other factors, most of them material, might convince us that this or that source was of greater import. Knowing everything about the source — not just its words, but the marginalia, its placement in a collection, or where it was found — can all shed potentially important light on what the source means and meant to others at the time it was created or later.

Given all of that, we wanted some sort of best practices for digitizers that would include common standards for such things as providing images of the original alongside the plain text on a white screen. As Sarah Werner wrote in her comment, creating such standards will require historians, bibliographers, archivists, and technologists to get together and discuss, among other things, what they (and our students) aren’t seeing when all we get is black pixels on a white screen.


Echo is a project of the Center for History and New Media, George Mason University
© Copyright 2008 Center for History and New Media