The Globe and Mail had a very interesting article on how Twitter hands your data to the highest bidder, but not to you. The article talks about how Twitter is archiving your data and selling it, but not letting you access your old tweets. It mentions that DataSift is one company licensed to mine the Twitter archives. DataSift presents itself as “the world’s most powerful and scalable platform for managing large volumes of information from a variety of social data sources.” In effect they do real-time text analysis for industry. Here is what they say in What we do:
DataSift offers the most powerful and sophisticated tools for extracting value from Social Data. The amount of content that Internet users are creating and sharing through Social Media is exploding. DataSift offers the best tools for collecting, filtering and analyzing this data.
Social Data is more complicated to process and analyze because it is unstructured. DataSift’s platform has been built specifically to process large volumes of this unstructured data and derive value from it.
One thing that DataSift has is a curation language called CSDL (Curated Stream Definition Language) for querying the cloud of data they gather. They provide an example of what you can do with it:
Here’s an example, just for illustration, of a complex filter that you could build with only four lines of CSDL code: imagine that you want to look at information from Twitter that mentions the iPad. Suppose you want to include content written in English or Spanish but exclude any other languages, select only content written within 100 kilometers of New York City, and exclude Tweets that have been retweeted fewer than five times. You can write that in just four lines of CSDL!
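The logic of that four-line filter is easy to approximate in ordinary code. Here is a minimal Python sketch of the same four conditions; the tweet fields and the NYC coordinates are my assumptions for illustration, not DataSift's actual API or CSDL:

```python
from math import radians, sin, cos, asin, sqrt

NYC = (40.7128, -74.0060)  # approximate latitude/longitude of New York City

def haversine_km(a, b):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def matches(tweet):
    """Apply the four conditions from the CSDL example: mentions iPad,
    English or Spanish, within 100 km of NYC, retweeted at least 5 times."""
    return ("ipad" in tweet["text"].lower()
            and tweet["lang"] in ("en", "es")
            and haversine_km(tweet["geo"], NYC) <= 100
            and tweet["retweets"] >= 5)

tweet = {"text": "Reading on my iPad", "lang": "en",
         "geo": (40.73, -73.99), "retweets": 12}
print(matches(tweet))  # True
```

The point of CSDL, of course, is that you don't write the plumbing at all; the platform compiles the declarative filter and runs it over the firehose for you.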
It would be interesting to develop an academic alternative similar to Archive-It, but for real-time social media tracking.
The latest version of our Old Bailey Datawarehousing Interface is up. This was the Digging Into Data project that got TAPoR, Zotero and Old Bailey working together. One of the things we built was an advanced visualization environment for the Old Bailey. This was programmed by John Simpson following ideas from Joerg Sanders. Milena Radzikowska did the interface design work and I wrote emails.
One feature we have added is the broaDHcast widget that allows projects like Criminal Intent to share announcements. This was inspired partly by the issues of keeping distributed projects like TAPoR, Zotero and Old Bailey informed.
The GRAND group has a work being exhibited at the InSight: Visualizing Health Humanities show that starts tonight. We used Unity to create a FPS (First Person Shooter) type of game for medical communication. The game, called CatHETR, lets players move through a ward dealing with communicative situations. This project was supported by the GRAND Network of Centres of Excellence.
The Guardian has a nice short story by Keza MacDonald that asks Are gamers really sexist?. It doesn’t really answer the question or propose solutions, but it documents again how people who speak out against sexist language get harassed.
As I mentioned in my post on the GRAND conference, Ken Perlin showed a number of interesting Java apps that illustrated visual ideas. One was an Interactive Map of Pride and Prejudice. This interactive map is a rich prospect of the whole text which you can move around to see particular parts. You can search for words (or strings) and see where they appear in the text. You can also select a passage of text and search for it. The interface is simple and intuitive. You can see how Perlin talks about it in his blog. I also recommend you look at his other experiments.
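The core idea behind such a map, showing where a search string falls across the whole text, is simple to sketch. Here is a toy Python version (my own illustration, not Perlin's code), which returns the relative position of each hit so it could be plotted along a text bar:

```python
def word_positions(text, query):
    """Return relative positions (0.0 to 1.0) of each occurrence of query,
    suitable for plotting as tick marks along a map of the whole text."""
    words = text.lower().split()
    q = query.lower()
    n = len(words)
    return [i / n for i, w in enumerate(words) if w.strip(".,;:!?\"'") == q]

sample = ("It is a truth universally acknowledged that a single man in "
          "possession of a good fortune must be in want of a wife")
print(word_positions(sample, "truth"))
```

Perlin's applet adds the interaction: zooming from the prospect of the whole novel down to the passage each tick mark represents.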
Last week I was at the GRAND 2012 conference. GRAND (Graphics, Animation, and New Media) is a Network of Centres of Excellence that brings together people across disciplines and across the country around gaming, new media and so on. You can see my GRAND 2012 conference notes here.
This year we had two of the best keynotes of any conference I have been to. Valerie Steeves talked about her research into parents and youth on the internet. The change in attitudes of both parents and youth to the internet between 2000 and today was dramatic. Ken Perlin was the closing keynote and he showed Java apps that he wrote as experiments. It made me want to learn to program in Java just to have as much fun as he was having.
From Humanist, a link to an article on online education, The X Factor (in the Brainstorm blog of the Chronicle of Higher Education). The post talks about how Harvard University has joined with MIT to create edX, an online education consortium. Harvard is now joining the MOOC (Massive Open Online Course) bandwagon pioneered by some Stanford profs who opened their courses to thousands. The author, Kevin Carey, points out that edX won’t compete with MIT or Harvard, but with other online providers and with less prestigious institutions.
I worry we are going to see a lessening of educational diversity. I worry that the star quality of MIT, Harvard and Stanford will drive out less prestigious players leaving us with a small number of online courses. Fewer instructors for more people will mean more standardization of education and less diversity.
The New York Times has a Room for Debate on this, Got a Computer? Get a Degree with different reactions to the news. Most seem positive, but few feel that certificates for taking MOOCs are comparable to real course credit.
I’ve been meaning to blog on the video circulating of Kurt Vonnegut talking about the Shape of Stories. He describes the curves followed by popular stories like “boy meets girl” and suggests computers could even understand such simple curves. In Lapham’s Quarterly you can read the text of this lecture with illustrations. See Kurt Vonnegut at the Blackboard. In this version he asks about the value of such systems, a question which could apply equally to computer-generated visualization:
The question is, does this system I’ve devised help us in the evaluation of literature? Perhaps a real masterpiece cannot be crucified on a cross of this design. How about Hamlet?
He concludes that the system doesn’t work because the truth is ambiguous. We simply don’t know in complex works (like Hamlet) if news is good or bad. Good literature is open to interpretation.
But there’s a reason we recognize Hamlet as a masterpiece: it’s that Shakespeare told us the truth, and people so rarely tell us the truth in this rise and fall here [indicates blackboard]. The truth is, we know so little about life, we don’t really know what the good news is and what the bad news is.
Many have noticed this amusing play on visualization, including an infographic on Visual.ly, Kurt Vonnegut on the Shapes of Stories.
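Vonnegut's good-fortune/ill-fortune axis is easy enough to caricature in code, which is part of the joke. Here is a toy sketch that scores story segments against a tiny hand-made valence lexicon; the lexicon and segmentation are illustrative assumptions, nothing more:

```python
# Tiny hand-made valence lexicon; a real attempt would use a proper
# sentiment lexicon, and would run into exactly the ambiguity Vonnegut notes.
GOOD = {"love", "joy", "wins", "happy", "married"}
BAD = {"death", "loses", "sad", "storm", "alone"}

def story_curve(segments):
    """Return one good-minus-bad score per story segment."""
    curve = []
    for seg in segments:
        words = set(seg.lower().split())
        curve.append(len(words & GOOD) - len(words & BAD))
    return curve

boy_meets_girl = ["boy meets girl and falls in love",
                  "girl loses interest and boy is sad and alone",
                  "boy wins girl back and they are married happy"]
print(story_curve(boy_meets_girl))  # [1, -3, 3]
```

Run this over Hamlet and you get Vonnegut's objection exactly: the lexicon has to decide whether each turn is good news or bad news, and in a masterpiece we simply don't know.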
Prism is the coolest idea I have come across in a long time. Coming from the University of Virginia Scholar’s Lab, Prism is a collaborative interpretation environment. Someone comes up with categories like “Rhetoric”, “Orientalism” and “Social Darwinism” for a text like Notes on the State of Virginia. Then people (with accounts, which you can get freely) go through and mark passages. This creates overlapping interpretative markup of the sort you used to get with COCOA in TACT, but unlike TACT, many people can do the interpretation – it can be crowdsourced.
They are planning some visualizations of the results including what look like the types of visualizations that TACT gave where you can see words distributed over tagged areas.
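The data structure underneath crowdsourced overlapping markup is worth sketching, since it is what makes those visualizations possible. A minimal Python version (my assumption about the record format, not Prism's actual code) tallies how many readers applied each category to each stretch of text:

```python
from collections import Counter

def tally(markings):
    """markings: list of (user, category, start, end) character spans.
    Returns a per-character Counter of categories, so overlapping
    interpretations can be compared or rendered as a heat map."""
    heat = {}
    for _user, category, start, end in markings:
        for pos in range(start, end):
            heat.setdefault(pos, Counter())[category] += 1
    return heat

marks = [("anna", "Rhetoric", 0, 10),
         ("ben", "Rhetoric", 5, 15),
         ("ben", "Orientalism", 0, 5)]
heat = tally(marks)
print(heat[7])  # Counter({'Rhetoric': 2})
```

Unlike COCOA tags in TACT, where one encoder's interpretation was baked into the text, here the markup is a pile of independent, possibly conflicting readings, and the overlaps themselves are the interesting data.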
Bethany Nowviskie explains the background to the project in this Scholar’s Lab post.
Jeff sent me a link to the beta TED Ed site where you can see how they are turning TED videos (and other animations) into simple lessons that we can use. See TED-Ed: Lessons Worth Sharing. The idea is that an instructor can reuse (flip) a video with their own questions and commentary. You can also use the framework with YouTube videos. Neat.
A nice story from the New York Times by Michael Winerip, Robo-Readers Used to Grade Test Essays (April 22, 2012), talks about automated essay scoring (AES) software. The story first reports a study from the University of Akron that showed that AES software is comparable to human graders (see A Win for the Robo-Readers by Steve Kolowich from Inside Higher Ed). The NYT story then goes on to report how Les Perelman, a director of writing at MIT, has shown how you can game AES tools. Among other things, they don’t check facts or truth, so you can write all sorts of outrageous things and still get a good score. The story discusses some of the patterns that get good scores, like lexical variety and long sentences. The story ends with the possibility that AES could be matched by essay-writing software,
Two former students who are computer science majors told him (Perelman) that they could design an Android app to generate essays that would receive 6’s from e-Rater. He says the nice thing about that is that smartphones would be able to submit essays directly to computer graders, and humans wouldn’t have to get involved.
Particularly interesting is an essay Perelman wrote to show how poor essays can game the system. I wish I could say that I never saw writing like this and that therefore there was no danger of AES systems rewarding the poor writing found in real essays,
In today’s society, college is ambiguous. We need it to live, but we also need it to love. Moreover, without college most of the world’s learning would be egregious. College, however, has myriad costs. One of the most important issues facing the world is how to reduce college costs. Some have argued that college costs are due to the luxuries students now expect. Others have argued that the costs are a result of athletics. In reality, high college costs are the result of excessive pay for teaching assistants.
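The surface features the story says AES tools reward, lexical variety and sentence length, are trivially easy to compute, which is exactly why they are easy to game. A toy sketch (not any real AES scorer such as e-Rater, whose internals are proprietary):

```python
import re

def surface_features(essay):
    """Crude surface measures of the kind the NYT story says AES rewards:
    average sentence length and lexical variety (type-token ratio)."""
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", essay.lower())
    return {
        "avg_sentence_length": len(words) / len(sentences),
        "lexical_variety": len(set(words)) / len(words),
    }

f = surface_features("In today's society, college is ambiguous. We need it to live.")
print(f)
```

Nothing here touches meaning or truth, which is Perelman's point: an essay blaming college costs on teaching-assistant pay can score as well as a sound one, so long as the sentences are long and the vocabulary varied.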
From Slashdot, a story about how the Faculty Advisory Council to the Library (of Harvard) sent around a Memorandum on Journal Pricing arguing that periodical subscriptions are not sustainable and that faculty should therefore publish in open-access journals.
The Faculty Advisory Council to the Library, representing university faculty in all schools and in consultation with the Harvard Library leadership, reached this conclusion: major periodical subscriptions, especially to electronic journals published by historically key providers, cannot be sustained: continuing these subscriptions on their current footing is financially untenable. Doing so would seriously erode collection efforts in many other areas, already compromised.
According to National Security Agency (of the USA) whistleblower William Binney, the NSA probably has most of our email. See the video Whistleblower: The NSA is Lying–U.S. Government Has Copies of Most of Your Emails. The question then is: what are they doing with it? He mentions that the email can be “put it into forms of graphing, which is building relationships or social networks for everybody, and then you watch it over time, you can build up knowledge about everyone in the country.” (see transcript on page). In other words they could be (or already are) building a large social graph that they can use in various ways.
In the transcript of the longer video Binney talks about various programs developed to filter out all the information:
Well, it was called Thin Thread. I mean, Thin Thread was our—a test program that we set up to do that. By the way, I viewed it as we never had enough data, OK? We never got enough. It was never enough for us to work at, because I looked at velocity, variety and volume as all positive things. Volume meant you got more about your target. Velocity meant you got it faster. Variety meant you got more aspects. These were all positive things. All we had to do was to devise a way to use and utilize all of those inputs and be able to make sense of them, which is what we did.
Binney goes on to talk about the code-named Stellar Wind program that Bush authorized and then was forced to change after a revolt of some sort in the Justice Department in 2004. Stories tell of senior Bush advisors trying to get Ashcroft to sign authorization papers for the program while he was in the hospital. As for Stellar Wind, it seems to be mostly about metadata – the date, to, and from of emails – which you could use to build a diachronic social graph, which is what Binney was talking about. Strictly speaking this would be social network analysis rather than text analysis, but they might have supplemented the system with some keyword capabilities. Another story from Time points out the problem with such analysis – that it generates too many vague false positives. “Leads from the Stellar Wind program were so vague and voluminous that field agents called them ‘Pizza Hut cases’ – ostensibly suspicious calls that turned out to be takeout food orders.”
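Building a social graph from nothing but message metadata is straightforward, which is what makes it powerful. A minimal Python sketch (the record format is my assumption for illustration): keep, for every sender-recipient pair, the dates they were in contact, and the "diachronic" part falls out for free.

```python
from collections import defaultdict

def build_graph(messages):
    """messages: iterable of (date, sender, recipient) metadata records.
    Returns {(sender, recipient): sorted list of contact dates},
    a minimal diachronic social graph built from metadata alone."""
    graph = defaultdict(list)
    for date, sender, recipient in messages:
        graph[(sender, recipient)].append(date)
    return {edge: sorted(dates) for edge, dates in graph.items()}

log = [("2012-01-03", "alice@example.com", "bob@example.com"),
       ("2012-02-11", "alice@example.com", "bob@example.com"),
       ("2012-02-12", "bob@example.com", "carol@example.com")]
g = build_graph(log)
print(len(g[("alice@example.com", "bob@example.com")]))  # 2
```

Note that no message content is needed at all; watching how edges appear, strengthen, and fade over time is the "watch it over time" Binney describes.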
Either way, these hints give us a tantalizing view into how text and network analysis is being experimented with. Are there any useful research applications?
I have been working for a while on archiving the Globalization Compendium, a project I worked on. Yesterday I got it archived in two Institutional Repositories:
In both cases there is a Zip of a BagIt bag with the XML files, code and other documentation from the site. My first major deposit.
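For those who haven't met it, a BagIt bag is a deliberately simple packaging convention: a declaration file, a data/ directory holding the payload, and a checksum manifest for fixity. A minimal Python sketch of the structure (the file names and payload here are invented for illustration; real deposits should use a proper BagIt library):

```python
import hashlib
from pathlib import Path

def make_bag(bag_dir, payload):
    """Write a minimal BagIt bag: bagit.txt declaration, data/ payload,
    and a SHA-256 manifest. payload maps relative file names to bytes."""
    bag = Path(bag_dir)
    (bag / "data").mkdir(parents=True, exist_ok=True)
    manifest = []
    for name, content in payload.items():
        (bag / "data" / name).write_bytes(content)
        digest = hashlib.sha256(content).hexdigest()
        manifest.append(f"{digest}  data/{name}")
    (bag / "manifest-sha256.txt").write_text("\n".join(manifest) + "\n")
    (bag / "bagit.txt").write_text(
        "BagIt-Version: 0.97\nTag-File-Character-Encoding: UTF-8\n")

make_bag("compendium_bag", {"site.xml": b"<compendium/>"})
print(sorted(p.name for p in Path("compendium_bag").iterdir()))
```

The virtue of the format for repository deposit is that a curator decades from now can verify the payload against the manifest with nothing more than a checksum tool.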
Daniel sent the link to this YouTube video, A walk through The Waste Land, that shows an iPad edition of The Waste Land developed by Touch Press. The version has the text, audio readings by various people, a video of a performance, the manuscripts, notes and photos. I was struck by how this extends to the iPad the experiments of the late 1980s and 1990s that exploded with the availability of HyperCard, Macromedia Director and CD-ROM. The most active publisher was Voyager, which remediated books and documentaries to create interactive works like Poetry in Motion (Vimeo demo of CD) or the expanded book series, but all sorts of educational materials were also being created that never got published. As a parent I was especially aware of the availability of titles as I was buying them for my kids (who, frankly, ignored them). Dr. Seuss ABC was one of the more effective remediations. Kids (and parents) could click on anything on the screen and entertaining animations would reinforce the alphabet.
What happened to all that activity? What happened to all those titles? To some extent they never went away, it is just that attention turned to the web as a means of delivery. The web changed the economics which then changed the design. CD-ROMs could be sold and people (like me) were willing to pay for professional titles. But it was hard to sell access to web materials when there was so much free stuff available and an expectation of free access. Thus companies changed what they sold when adapting to the web. Web sites were built that were free and promoted the print books, like Seussville. These offered supplementary activities and in some cases monetized eyeballs with advertising, but they did not give away free interactive book experiences. Now the iPad (or, to be accurate, the App Store) has brought back a viable economic model where people can buy interactive books.
With Apple’s latest announcement of iBooks textbooks and iBooks Author, the attention is back on interactive books. Apple is clearly trying to change the economics of textbooks and how they are consumed. They want schools to move to iPads and kids to get interactive textbooks from publishers and authors who use iBooks Author to remediate books. Whether Apple sews up the market or we get a more open model, there is a lot to be said for (and against) moving away from print for textbooks.
To get a sense of what the new interactive books might look like there is an interesting demo in a TED talk by Mike Matas: A next-generation digital book. He demos Al Gore’s Our Choice published by Push Pop Press. From the demo this looks like a book with a bunch of video and infographics tacked on. I don’t see a compelling reason for getting the interactive version of the book. In the case of the iPad “The Waste Land” they used multimedia to thoroughly enhance the poem with readings and scholarship that could actually change your perception of the poem. Here, by contrast, the multimedia seems like a supplement that just reinforces the content. The TED talk ends with a hokey interaction where you blow into the iPad or iPhone to animate a graphic.
To be honest, I haven’t played with either one, just watched the demos. “Our Choice” could, as Al Gore says in his Guided Tour, use interactive infographics in ways that really let you understand the data differently. I also like the pinching and folding interaction they have pioneered for picking things up. The larger question is where interactive books are going. Will Apple convince schools and publishers to move to interactive textbooks? Will kids end up carrying around both heavy print texts and iPads or will the shift be complete at the expense of many texts? Personally I still buy print books of things I expect to want to consult over time even when there is an electronic version. Print books I only have to buy once (and then move in boxes forever). Electronic versions I have to buy again and again as media like CD-ROMs go out of fashion, operating systems change, and viewing devices morph. Books are designed to last a lifetime; electronic media are obsolete before you finish walking through the wasteland.
Susan pointed me to Leximancer which is a commercial text analysis tool that creates mind maps of your information. I’m struck by how compelling people find mind maps.
Leximancer enables you to navigate the complexity of text in a uniquely automated fashion. Our software identifies ‘Concepts’ within the text – not merely keywords but focused clusters of related, defining terms as conceptualised by the Author. Not according to a predefined dictionary or thesaurus.
The Concepts are presented in a compelling, interactive display so that you can clearly visualise and interrogate their inter-connectedness and co-occurrence – which is as important as the Concepts themselves – right down to the original text that spawned them.
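Leximancer's pitch of "concepts" as clusters of co-occurring terms, rather than keywords from a predefined thesaurus, can be gestured at with a simple co-occurrence count. A toy Python sketch (my illustration, not Leximancer's actual algorithm, which does much more):

```python
from collections import Counter
from itertools import combinations

def cooccurrence(sentences):
    """Count how often word pairs appear in the same sentence.
    Heavily co-occurring pairs are candidate 'concept' clusters."""
    pairs = Counter()
    for sentence in sentences:
        words = sorted(set(sentence.lower().split()))
        pairs.update(combinations(words, 2))
    return pairs

docs = ["social data is unstructured data",
        "social media data is exploding",
        "filtering social data"]
pairs = cooccurrence(docs)
print(pairs[("data", "social")])  # 3
```

The mind-map visualization is then essentially a graph layout of these pair weights, which may be why people find it so compelling: the picture of connectedness comes almost for free from the counts.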
The Guardian has a great series on the Battle for the internet. This includes a number of interventions by Tim Berners-Lee including Tim Berners-Lee urges government to stop the snooping bill and Tim Berners-Lee: demand your data from Google and Facebook. There is an article, Web freedom faces greatest threat ever, warns Google’s Sergey Brin, about the dangers of walled gardens like Facebook and Apple’s App Store. One might say the same about Google.
I just got a complimentary copy of La macchina nel tempo: Studi di informatica umanistica in onore di Tito Orlandi (The Time Machine: Studies in humanities computing in honour of Tito Orlandi), which I blogged about before. This got me wondering how much of Prof. Tito Orlandi’s writings are available online and what his legacy is. It turns out that Orlandi has put together a list of his publications with links to online versions where possible. There are even some in English like the excellent Is Humanities Computing a Discipline?
But how might one summarize Orlandi’s contribution? In his prefatory “Controcanto,” one of the editors of The Time Machine, Domenico Fiormonte, writes about first encountering Orlandi in a bunker where Fiormonte then spent a summer. During that summer he learned three things:
These three lessons seem about as good a starting place for the digital humanities as any. They also suggest some of what Tito Orlandi was interested in, namely formalization, redefinition, and interpretation. But surveying Orlandi’s writings, using the list of digital humanities publications from his personal site, you can see other themes. He believed that we needed to develop the theoretical foundations of humanities computing and that we should do so starting from the mathematical model of the computer, not from how it works practically (see Informatica, Formalizzazione e Discipline Umanistiche, in Italian). He believed that would help us understand how one can model culture on a computer. He discussed the importance of modelling before Willard McCarty did in Humanities Computing – something that should be recognized out of fairness to the pioneering work of Italian digital humanists since Busa.
Reading Orlandi and about Orlandi, I also sense an impatience with those who follow him. This is what he writes in an unpublished talk given in London in 2000, where he is discussing other scholars’ accounts of the digital humanities:
I feel a sense of inadequateness, even disorder, in the overall change as presented by the same scholars. In fact, when they proceed to propose a definition of humanities computing, they tend to consider the products of computation, be they hardware (the Net) or software (applications like concordance programs or statistical packages), rather than the first principles of computing.
Orlandi wanted to ground the digital humanities in mathematics – a language common to informatics, science and potentially the digital humanities. That the digital humanities wandered off into hypertext, new media and so on seems to have annoyed him. He was also irritated that ideas he had been teaching and writing about for years were being ignored in the English-speaking world. Take a look at The Scholarly Environment of Humanities Computing: A Reaction to Willard McCarty’s talk on The computational transformation of the humanities. This web page discusses an outburst of his at a paper by McCarty with what Orlandi felt were ideas he had been discussing for a decade at least. It is instructive how he sets aside his pride to get at the issues that matter. He might be irritated, but he also wants to use this to reflect on more important issues.
Perilli and Fiormonte have done a great job bringing together a festschrift in honour of Orlandi. The Time Machine isn’t really about Orlandi’s thought so much as about his legacy in Italy. What we need now is for his foundational works to be translated and a retrospective interpretation of his contributions.
An article in the New York Times led me to the Google Art Project. This project doesn’t feel like a Google project, perhaps because it uses an off-black background and the interface is complex. The project brings together artwork and virtual tours of many of the world’s important museums (but not all). You can browse by collection, artist (by first name), artworks, and user galleries. You can change the language of the interface (and it seems to change even when you don’t want it to in certain circumstances). When viewing a gallery you can get a wall of paintings or a street-view virtual tour of the gallery. Above you see the “Museum View” of a room in the Uffizi with a barrier around a Filippino Lippi that is being treated for a woodworm infestation! In the Museum View you can pan around and move up to paintings much as you would in Google Maps in Street View. On the left is a floor plan that you can also use.
This site reminds me of what was one of the best multimedia CD-ROMs ever, the Musee d’Orsay: Virtual Visit. This used QuickTime VR to provide a virtual tour. It had the sliding walls of art. It also had special guides and some nice comparison tools that let you get a sense of the size of a work of art. The Google Art Project feels loosely copied from this right down to the colour scheme. It will be interesting to see if the Google Art Project subsumes individual museum sites or consortia like the Art Museum Image Consortium (AMICO).
I find it interesting how Google is developing specialized interfaces for more and more domains. The other day I was Googling for movies in Edmonton and found myself on a movies – Google Search page that arranges information conveniently. The standard search interface is adapting.