It won’t be long (one month, actually) before Teaching History in the Digital Age is available. But the cover has now appeared on the Michigan Press website and I’m very pleased with the result.
On February 23, I was honored to speak at an Invited Symposium on Digital Humanities at the American Philosophical Association’s Central Division Meeting in New Orleans. Organized by Cameron Buckner, who is a Founding Project Member of InPhO and one of the leaders of the University of Houston’s Digital Humanities Initiative, the session also featured great talks by Tony Beavers on computational philosophy and David Bourget on PhilPapers.
One of the central questions that we explored was why philosophy seems to be less visibly engaged in digital humanities; as Peter Bradley once wondered, “Where Are the Philosophers?” As I noted in my talk, the NEH’s Office of Digital Humanities has only awarded 5 grants in philosophy (4 out of 5 to Colin Allen and colleagues on the InPhO project). Although the APA conference was much smaller than MLA or AHA, I was still surprised that there seemed to be only two sessions on DH, compared to 66 at MLA 2013 and 43 at AHA 2013.
Yet there are some important intersections between DH and philosophy. Beavers pointed to a rich history of scholarship in computational philosophy. With PhilPapers, philosophy is ahead of most other humanities disciplines in having an excellent online index to, and growing repository of, research. Most of the challenges faced by philosophers with an interest in DH also apply in other domains, such as figuring out how to acquire appropriate training (particularly for graduate students), recognizing and rewarding collaborative work, etc.
My talk was a remix and updating of my presentation “Why Digital Humanities?” In exploring the rationale for DH, I tried to cite examples relevant to philosophy. For example, the Stanford Encyclopedia of Philosophy, a dynamic online encyclopedia that predates Wikipedia, has had a significant impact, with an average of nearly a million weekly accesses during the academic year. With CT2.0, Peter Bradley aims to create a dynamic, modular, multimedia, interactive, community-driven textbook on critical thinking. Openness and collaboration also inform the design of Chris Long and Mark Fisher’s planned Public Philosophy Journal, which seeks to put public philosophy into practice by curating conversations, facilitating open review, encouraging collaborative writing, and fostering open dialogue. Likewise, I described how Transcribe Bentham is enabling the public to help create a core scholarly resource. I also discussed recent critiques of DH, including Stephen Marche’s “literature is not data,” the 2013 MLA session on the “dark side” of DH, and concerns that DH risks being elitist. I closed by pointing to some useful resources in DH and calling for open conversation among the DH and philosophy communities. With that call in mind, I wonder: Is it the case that philosophy is less actively engaged in digital humanities? If so, why, and what might be done to address that gap?
A few days ago I received a notice from YouTube about one of our videos. Apparently someone had marked it “inappropriate” and, following review by YouTube staff, the video was age-restricted.
The video in question is part of a series called “Favourite Things”, in which museum staffers select one of their favourite museum objects and describe it and why it is so special. In this particular video, Collections Manager Ion Meyer shows and describes three preparations of a so-called ischiopagus, that is, twins conjoined at the pelvis.
Since the video was published in March 2011, it has had almost 220,000 views. In comparison, the second-most-watched video on our YouTube channel has had fewer than 10,000 views. The ischiopagus video has also triggered more comments than is usual for our videos. We have tried to respond to all serious comments, but we have also chosen not to respond to some, e.g.
Why would any parent let someone do this to their children! They need a proper burial! Bless there souls! <3
If you look at the YouTube guidelines, reasons for placing an age restriction on a video include
However, they also highlight notable exceptions for
some educational, artistic, documentary and scientific content (e.g. health education, documenting human rights issues, etc.), but only if this is the sole purpose of the video and it is not gratuitously graphic…
Without proper context and explanation, I can see how someone could find the imagery in the video disturbing. However, it should be clear that the purpose of this video is exactly as described in the exception.
Is the video inappropriate for young audiences? I don’t think so. However, YouTube provides no means of appealing an age restriction imposed on a video, so it doesn’t really matter what we think. I wonder if other museums have had similar experiences with videos on YouTube?
You can see the video below and judge for yourself.
Steering partners and clients toward simpler web designs is one of the greatest services we can render. In consultations and collaborative projects, I often find myself advocating for less, less, less. This is especially true when it comes to color schemes—historians aren’t easily put off their beiges, navy blues, burgundies, and parchment textured backgrounds. I do not have any design training, so I have just as often been frustrated by my lack of appropriate and convincing language to explain that when it comes to color, less is often more. Until now.
Last week I met a design professor who gave me the words. “When we are teaching color to design students,” he said, “we always tell them to start with black, white, and red.” “You don’t have to stay there, but any time you stray from black, white, and red, you should have a good reason.” “It’s no accident Coca-Cola, Marlboro, and Santa Claus are the world’s most recognizable brands.”
To this list he added the highly stylized opening titles of the fashion-conscious television show Mad Men. I immediately thought of Nike Air Jordans and the covers of Time, Life, Newsweek, and The Economist. I’m sure there are many others. Black, white, and red just work. Please feel free to share additional examples in the comments.
[Image credit: ididj0emama]
Of the many different courses I teach, the one I’ve made the fewest changes in over the past decade is my survey of modern Eastern Europe. Every other course I teach has been reconfigured in various ways as a result of my research into the scholarship of teaching and learning, but for some reason, I’ve never gotten around to altering this course. I’m ashamed to say that when I taught it last semester, it was really not that much different from the way I taught it for the first time way back in 1999.
I could offer various excuses for why that course seems so similar to its original incarnation, but really the only reason is inertia. I’ve rewritten four other courses and have created five others from scratch in the past six or seven years and because my East European survey worked reasonably well, it was last in line for renovation.
The good news for future students is that I’ve taught it that way for the last time.
Like all upper division survey courses, HIST 312 poses a particular set of challenges. Because we have no meaningful prerequisites in our department (except for the Senior Seminar, which requires students to have passed Historical Methods), students can show up in my class having taken no history courses at the college level. And even if they have, the coverage of the region we used to call Eastern Europe is so thin in other courses that it is as though they had never taken another course anyway. That means I always spend a fair amount of time explaining just where we are talking about, who the people are who live there, and so on, before we get to the real meat and potatoes of the semester.
And then there is the fact that this course spans a century and eight countries (and then five more once Yugoslavia breaks up); it’s a pretty complex story.
To help students make sense of that complexity, over the years I’ve narrowed the focus of the course substantially, following Randy Bass’s advice to me many years ago: “The less you teach, the more they learn.” We focus on three main themes across all this complexity and by the end of the semester, most of the students seem to have a pretty good grasp of the main points I wanted to make. Or at least they reiterated those points to me on exams and final papers. And it’s worth noting that they like the course. I just got my end of semester evaluations from last semester and the students in that class rated it a 5.0 on a 5 point scale, while rating my teaching 4.94.
What I don’t know is whether they actually learned anything.
This semester I’m part of a reading group that is working its way through How Learning Works, and this past week we discussed the research on how students’ prior knowledge influences their thinking about whatever they encounter in their courses. This chapter reminded me a lot of an essay by Sam Wineburg on how the film Forrest Gump has played such a large role in students’ learning about the Vietnam War. Drawing on the work of cognitive psychologists and their own research, Ambrose et al. and Wineburg come to the same conclusion, namely, that it is really, really difficult for students (or us) to let go of prior knowledge, no matter how idiosyncratically acquired, when trying to make sense of the past (or any other intellectual problem).
The research they describe seems pretty compelling to me, especially because much of it comes from lab studies rather than water cooler anecdotes about student learning. Because it’s so compelling, I’ve decided to rewrite my course around the notion of working from my students’ prior knowledge. Getting from where they are when they walk in the room on the first day of the semester and where I want them to be at the final exam is the challenge that will animate me throughout the term.
My plan right now (and it’s a tentative plan because I won’t teach the course again for a couple of semesters) is to begin the semester with three short in-class writing assignments on the three big questions/themes that run through the course. I want to know where my students are with those three before I try to teach them anything. Once I know where they are, I can rejigger my plans for the semester to meet them where they are rather than where I might like them to be. And then as we complete various segments of the course I’ll have them repeat this exercise so I can see whether they are, as I hope, building some sort of sequential understanding of the material. By the end of the semester I ought to be able to track progress in learning (at least I hope I will), which is an altogether different thing than hoping to see evidence of the “correct answer” compromise.
As part of the upcoming workshop “It’s Not What You Think: Communicating Medical Materialities”, we are delighted to announce that the pioneering bioartist Oron Catts will be giving a public keynote lecture on Friday March 8th at 17.00 in the auditorium at Medical Museion.
Oron Catts is a prominent and defining figure in the emerging field of bioarts, which examines shifting perceptions of life through the lens of the life sciences. Famous for his work with The Tissue Culture and Art Project, he also co-founded the bioart lab SymbioticA at the University of Western Australia.
Here is the title and abstract for the talk, which can also be found on our seminar page:
The Puzzle of Neolifism, the Strange Materiality of Regenerative and Synthetically Biological Things.
In 1906 Jacques Loeb suggested making a living system from dead matter as a way to debunk the vitalists’ ideas and claimed to have demonstrated ‘abiogenesis’. In 2010 Craig Venter announced that he created “the first self-replicating cell we’ve had on the planet whose parent is a computer” the “Mycoplasma laboratorium” which is commonly known as Synthia. In a sense Venter claimed to bring Loeb’s dream closer to reality. What’s relevant to our story is that one of the main images Venter (or his marketing team) chose for the outing of Synthia was of two round cultures that looked like a blue eyed gaze; a metaphysical image representing the missing eyes of the Golem. These are the first bits of a jigsaw puzzle that will be laid in this talk. Through the notion of Neolifism, this puzzle will explore and Re/De-Contextualise the strange materiality of things and assertions of regenerative and synthetic biology. Other parts of the puzzle include a World War II crash site of a Junkers 88 bomber at the far north of Lapland, the first lab where the Tissue Culture & Art Project started to grow semi-living sculptures, frozen arks and de-extinctions, Alexis Carrel, industrial farms, Charles Lindbergh, worry dolls, rabbits’ eyes, ear-mouse, gas chambers, active biomaterials, in-vitro meat and leather, incubators, freak-shows, museums, ghost organs, drones, crude matter, mud and a small piece of Plexiglas that holds this puzzle together…
About Oron Catts:
Oron Catts is an artist, researcher and curator whose pioneering work with the Tissue Culture and Art Project, which he established in 1996, is considered a leading biological art undertaking. In 2000, Oron founded SymbioticA, an artistic research centre in the School of Anatomy, Physiology and Human Biology at The University of Western Australia. SymbioticA won the Prix Ars Electronica Golden Nica in Hybrid Art in 2007 and a year later became a Centre for Excellence. In 2009, Oron was listed in Thames & Hudson’s ‘60 Innovators Shaping our Creative Future’ and named by Icon Magazine (UK) as one of the ‘Top 20 designers making the future and transforming the way we work’. Oron’s interest is life itself or, more specifically, the shifting relations and perceptions of life in the light of new knowledge and its application. Often developed in collaboration with scientists and other artists, his body of work speaks volumes about the need for a new cultural articulation of evolving concepts of life. Oron has been a Research Fellow at Harvard Medical School and a Visiting Scholar at the Department of Art and Art History, Stanford University. He is currently the Director of SymbioticA, a Visiting Professor of Design Interaction at the Royal College of Arts, London, and a Visiting Professor at Aalto University’s Biofilia – Base for Biological Arts, Helsinki. Oron’s work reaches beyond the confines of art, often being cited as an inspiration in areas as diverse as new materials, textiles, design, architecture, ethics, fiction and food.
Image credit – Crude Matter (2012) by The Tissue Culture & Art Project (Oron Catts and Ionat Zurr), installation detail from “SOFT CONTROL: Art, Science and the Technological Unconscious”, Koroška galerija likovnih umetnosti (KGLU), Slovenj Gradec.
Social media and public health is a diverse field, and there is always some new corner to explore! These days I am expanding my knowledge of the use of social media for disaster management and coordination, because next week I will be giving a lecture on the topic to students in the Master of Disaster Management programme at the University of Copenhagen.
It has been exciting to dig into a new field and to experience how social media really presents great new opportunities, but of course also new challenges. Since I hadn’t previously worked specifically with disaster management, a few weeks ago I chose to ask my Twitter followers for help finding good literature and resource people in the field. And once again, Twitter didn’t let me down.
Blogs, websites and hashtags
I got a lot of great suggestions for blogs, websites, Twitter chats, hashtags and people to follow and hook up with on Twitter (a big thank you to all of you who responded!).
The blogs are a good starting point, especially since most of them offer great links to other resources. The most helpful so far has been the website/blog Social Media 4 Emergency Management, which provides access to wikis, archives of Twitter chats (#smemchat), videos, blogs, etc. on social media and emergency management. The only ‘problem’ with the website is that there is almost too much information.
Another super helpful resource is the blog idisaster2.0 (primarily run by @kim26stephens). It has lots of informative blog posts as well as a good bibliography of selected academic and government resources on social media and emergency management.
Own experiences with disasters and social media?
When I was asked to give the lecture, I hesitated for a moment, because what did I know about emergencies and disasters? Apart from my solid knowledge of social media in public health, including some superficial insight into its role in disasters, I had never had anything to do with disasters, let alone experienced one… However, the latter is not true, I quickly realised. I have actually, to some extent, been in an emergency setting, and I have in fact experienced the role of social media in a disaster situation.
Earthquake in Japan in 2011
I was in Japan, when the big earthquake, subsequent tsunami and finally the Fukushima nuclear plant crisis occurred in March 2011. Being relatively far from the epicenter of the disaster (I was based in Kobe in the Kansai region), I wasn’t directly surrounded by flooded buildings, elevated radiation risks or other immediate danger. But I was surrounded by potential danger, by worried friends and family in Denmark and by Japanese friends and colleagues with close relatives in the affected areas.
Looking back on my Facebook timeline, I can now see how social media actually played an important role for me during the emergency. I used Facebook to assure others that I was okay and kept them updated on my situation. I started following the Danish Embassy in Japan’s Facebook page, through which they shared information several times daily about risks, advice on how to act, and the organisation of a potential evacuation. I encouraged the mobilization of emotional and financial support for Japan by sharing links and QR codes. And I experienced how a Japanese colleague of mine, after days of no contact with her sister living in Sendai where the tsunami hit, finally got in contact through Facebook and found out that she was safe…
So yes, I have actually experienced a disaster, and experienced how social media can be used in this kind of situation. I plan to share my experiences as a case with the students next week and hope that this real-life experience can contribute to their understanding and spark some discussion.
Although I have already received great tips from people on Twitter, I am still a happy recipient of input on social media and emergency/disaster management. Suggestions for discussion topics, assignments, or any other ideas on how to involve the students are more than welcome, as are links to guidelines, scientific articles, etc.
On December 21, 2012, Blake Ross—the boy genius behind Firefox and currently Facebook’s Director of Product—posted this to his Facebook page:
Some friends and I built this new iPhone app over the last 12 days. Check it out and let us know what you think!
The new iPhone app was Facebook Poke. One of the friends was Mark Zuckerberg, Facebook’s founder and CEO. The story behind the app’s speedy development and Zuckerberg’s personal involvement holds lessons for the practice of digital humanities in colleges and universities.
Late last year, Facebook apparently entered negotiations with the developers of Snapchat, an app that lets users share pictures and messages that “self-destruct” shortly after opening. Feeding on user worries about Facebook’s privacy policies and use and retention of personal data, in little more than a matter of weeks, Snapchat had taken off among young people. By offering something Facebook didn’t—confidence that your sexts wouldn’t resurface in your job search—Snapchat exploded.
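The “self-destruct” mechanic itself is conceptually simple: attach a time-to-live to each message and, once it expires, delete it rather than serve it. Here is a minimal sketch of that idea in Python; the class and method names are hypothetical and bear no relation to Snapchat’s actual implementation.

```python
import time

class EphemeralStore:
    """Toy store for messages that expire ("self-destruct") after a TTL."""

    def __init__(self, ttl_seconds=10.0):
        self.ttl = ttl_seconds
        self._messages = {}  # msg_id -> (payload, created_at)

    def put(self, msg_id, payload):
        # Record the message along with its creation time.
        self._messages[msg_id] = (payload, time.monotonic())

    def get(self, msg_id):
        entry = self._messages.get(msg_id)
        if entry is None:
            return None
        payload, created = entry
        if time.monotonic() - created > self.ttl:
            # Expired: delete instead of returning -- the "self-destruct".
            del self._messages[msg_id]
            return None
        return payload
```

A real service would of course enforce expiry server-side and scrub caches and backups too; client-side deletion alone gives only the appearance of ephemerality.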
It is often said that Facebook doesn’t understand privacy. I disagree. Facebook understands privacy all too well, and it is willing to manipulate its users’ privacy tolerances for maximum gain. Facebook knows that every privacy setting is its own niche market, and if its privacy settings are complicated, it’s because the tolerances of its users are so varied. Facebook recognized that Snapchat had filled an unmet need in the privacy marketplace, and tried first to buy it. When that failed, it moved to fill the niche itself.
Crucially for our story, Facebook’s negotiations with Snapchat seem to have broken down just weeks before a scheduled holiday moratorium for submissions to Apple’s iTunes App Store. If Facebook wanted to compete over the holiday break (prime time for hooking up, on social media and otherwise) in the niche opened up by Snapchat, it had to move quickly. If Facebook couldn’t buy Snapchat, it had to build it. Less than two weeks later, Facebook Poke hit the iTunes App Store.
Facebook Poke quickly rose to the top of the app rankings, but has since fallen off dramatically in popularity. Snapchat remains among iTunes’ top 25 free apps. Snapchat continues adding users and has recently closed a substantial round of venture capital funding. To me Snapchat’s success in the face of such firepower suggests that Facebook’s users are becoming savvier players in the privacy marketplace. Surely there are lessons in this for those of us involved in digital asset management.
Yet there is another lesson digital humanists and digital librarians should draw from the Poke story. It is a lesson that depends very little on the ultimate outcome of the Poke/Snapchat horse race. It is a lesson about digital labor.
Mark Zuckerberg is CEO of one of the largest and most successful companies in the world. It would not be illegitimate if he decided to spend his time delivering keynote speeches to shareholders and entertaining politicians in Davos. Instead, Zuckerberg spent the weeks between Thanksgiving and Christmas writing code. Zuckerberg identified the Poke app as a strategic necessity for the service he created, and he was not too proud to roll up his sleeves and help build it. Zuckerberg explained the management philosophy behind his “do it yourself” impulse in the letter he wrote to shareholders prior to Facebook’s IPO. In a section of the letter entitled “The Hacker Way,” Zuckerberg wrote:
The Hacker Way is an approach to building that involves continuous improvement and iteration. Hackers believe that something can always be better, and that nothing is ever complete. They just have to go fix it – often in the face of people who say it’s impossible or are content with the status quo….
Hacking is also an inherently hands-on and active discipline. Instead of debating for days whether a new idea is possible or what the best way to build something is, hackers would rather just prototype something and see what works. There’s a hacker mantra that you’ll hear a lot around Facebook offices: “Code wins arguments.”
Hacker culture is also extremely open and meritocratic. Hackers believe that the best idea and implementation should always win – not the person who is best at lobbying for an idea or the person who manages the most people….
To make sure all our engineers share this approach, we require all new engineers – even managers whose primary job will not be to write code – to go through a program called Bootcamp where they learn our codebase, our tools and our approach. There are a lot of folks in the industry who manage engineers and don’t want to code themselves, but the type of hands-on people we’re looking for are willing and able to go through Bootcamp.
Now, listeners to Digital Campus will know that I am no fan of Facebook, which I abandoned years ago, and I’m not so naive as to swallow corporate boilerplate hook, line, and sinker. Nevertheless, it seems to me that in this case Zuckerberg was speaking from the heart and not the wallet. As Business Insider’s Henry Blodget pointed out in the days of Facebook’s share price freefall immediately following its IPO, investors should have read Zuckerberg’s letter as a warning: he really believes this stuff. In the end, however, whether it’s heartfelt or not, or whether it actually reflects the reality of how Facebook operates, I share my colleague Audrey Watters’ sentiment that “as someone who thinks a lot about the necessity for more fearlessness, openness, speed, flexibility and real social value in education (technology) — and wow, I can’t believe I’m typing this — I find this part of Zuckerberg’s letter quite a compelling vision for shaking up a number of institutions (and not just “old media” or Wall Street).”
There is a widely held belief in the academy that the labor of those who think and talk is more valuable than the labor of those who build and do. Professorial contributions to knowledge are considered original research while librarians and educational technologists’ contributions to these endeavors are called service. These are not merely imagined prejudices. They are manifest in human resource classifications and in the terms of contracts that provide tenure to one group and, often, at will employment to the other.
Digital humanities is increasingly in the public eye. The New York Times, the Los Angeles Times, and the Economist all have published feature articles on the subject recently. Some of this coverage has been positive, some of it modestly skeptical, but almost all of it has focused on the kinds of research questions digital humanities can (or maybe cannot) answer. How digital media and methods have changed humanities knowledge is an important question. But practicing digital humanists understand that an equally important aspect of the digital shift is the extent to which digital media and methods have changed humanities work and the traditional labor and power structures of the university. Perhaps most important has been the calling into question of the traditional hierarchy of academic labor which placed librarians “in service” to scholars. Time and again, digital humanities projects have succeeded by flattening distinctions and divisions between faculty, librarians, technicians, managers, and students. Time and again, they have failed by maintaining these divisions, by honoring traditional academic labor hierarchies rather than practicing something like the hacker way.
Blowing up the inherited management structures of the university isn’t an easy business. Even projects that understand and appreciate the tensions between these structures and the hacker way find it difficult to accommodate them. A good example of an attempt at such an accommodation has been the “community source” model of software development advanced by some in the academic technology field. Community source’s successes and failures, and the reasons for them, illustrate just how important it is to make room for the hacker way in digital humanities and academic technology projects.
As Brad Wheeler wrote in EDUCAUSE Review in 2007, a community source project is distinguished from more generic open source models by the fact that “many of the investments of developers’ time, design, and project governance come from institutional contributions by colleges, universities, and some commercial firms rather than from individuals.” Funders of open source software in the academic and cultural heritage fields have often preferred the community source model assuming that, because of high level institutional commitments, the projects it generates will be more sustainable than projects that rely mainly on volunteer developers. In these community source projects, foundations and government funding agencies put up major start-up funding on the condition that recipients commit regular staff time—”FTEs”—to work on the project alongside grant funded staff.
The community source model has proven effective in many cases. Among its success stories are Sakai, an open source learning management system, and Kuali, an open source platform for university administration. Just as often, however, community source projects have failed. As I argued in a grant proposal to the Library of Congress for CHNM’s Omeka + Neatline collaboration with UVa’s Scholars’ Lab, community source projects have usually failed in one of two ways: either they become mired in meetings and disagreements between partner institutions and never really get off the ground in the first place, or they stall after the original source of foundation or government funding runs out. In both cases, community source failures lie in the failure to win the “hearts and minds” of the developers working on the project, in the failure to flatten traditional hierarchies of academic labor, in the failure to do it “the hacker way.”
In the first case—projects that never really get off the ground—developers aren’t engaged early enough in the process. Because they rely on administrative commitments of human resources, conversations about community source projects must begin with administrators rather than developers. These collaborations are born out of meetings between administrators located at institutions that are often geographically distant and culturally very different. The conversations that result can frequently end in disagreement. But even where consensus is reached, it can be a fragile basis for collaboration. We often tend to think of collaboration as shared decision making. But as I have said in this space before, shared work and shared accomplishment are more important. As Zuckerberg has it, hacking is “inherently hands-on and active”; “instead of debating for days whether a new idea is possible or what the best way to build something is, hackers would rather just prototype something and see what works”; and “the best idea and implementation should always win—not the person who is best at lobbying for an idea or the person who manages the most people.” That is, the most successful digital work occurs at the level of work, not at the level of discussion, and for this reason hierarchies must be flattened. Everyone has to participate in the building.
In the second case—projects that stall after funding runs out—decisions are made for developers (about platforms, programming languages, communication channels, deadlines) early on in the planning process that may deeply affect their work at the level of code sometimes several months down the road. These decisions can stifle developer creativity or make their work unnecessarily difficult, both of which can lead to developer disinterest. Yet experience both inside and outside of the academy shows us that what sustains an open source project after funding runs out is the personal interest and commitment of developers. In the absence of additional funding, the only thing that will get bugs fixed and forum posts answered are committed developers. Developer interest is often a project’s best sustainability strategy. As Zuckerberg says, “hackers believe that something can always be better, and that nothing is ever complete.” But they have to want to do so.
When decisions are made for developers (and other “doers” on digital humanities and academic technology projects such as librarians, educational technologists, outreach coordinators, and project managers), they don’t. When they are put in a position of “service,” they don’t. When traditional hierarchies of academic labor are grafted onto digital humanities and academic technology projects that owe their success as much to the culture of the digital age as they do to the culture of the humanities, they don’t.
Facebook understands that the hacker way works best in the digital age. Successful digital humanists and academic technologists do too.
[This post is based on notes for a talk I was scheduled to deliver at a NERCOMP event in Amherst, Massachusetts on Monday, February 11, 2013. The title of that talk was intended to be "'Not My Job': Digital Humanities and the Unhelpful Hierarchies of Academic Labor." Unfortunately, the great Blizzard of 2013 kept me away. Thankfully, I have this blog, so all is not lost.]
[Image credit: Thomas Hawk]
The web search community, in recent months and years, has heard quite a bit about the ‘knowledge graph’. The basic concept is reasonably straightforward - instead of a graph of pages, we propose a graph of knowledge in which the nodes are atoms of information of some form and the links are relationships between those atoms. The knowledge graph concept has become established enough for it to be used as a point of comparison between Bing and Google.
Last night, I went to see a performance of Kodo - regarded internationally as the premier taiko group. A search on Bing for 'kodo' produced the following result:
Bing showed good results for the web and images, as well as a knowledge-driven portion of the answer from Wikipedia with links to play some of their songs. Not bad - but no mention of the performance.
As Kodo were performing at Meany Hall on the University of Washington campus, I did another search on Bing for the venue:
Here we see something better - the venue is recognized as a venue and consequently joined with the events that are known to Bing, including the concert I was attending. As the event information included a link to the performer (the blue Kodo link in the screenshot), I followed it through and found that Bing gave me event information for the performer as well.
In these interactions, we can see part of the promise of the knowledge graph, but many areas for improvements. The event node relates the performer to the venue to the event. However the venue information in this part of the graph is isolated from that used to deliver the result for the query purely about the venue (note that the addresses are different - a common problem with campus and mall-like areas). The above experience, I think, shows the true challenge of the knowledge graph proposition - bringing all the isolated data graphs together correctly when the nodes in the graphs are actually representations of the same real world entities.
Note that in exploring this particular scenario, Bing appeared to be doing a little better than Google, though Google had partial event information associated with the Kodo entity.
Much of what we see out there in the form of knowledge returned for searches is really isolated pockets of related information (the date and place of birth of a person, for example). The really interesting things start happening when the graphs of information become unified across type, allowing - as suggested by this example - the user to traverse from a performer to a venue to all the performers at that venue, etc. Perhaps ‘knowledge engineer’ will become a popular résumé buzzword in the near future, as ‘data scientist’ has recently.
The early days of web search were essentially about observation. The web search engine observed the web (documents, links and user behaviours) and then delivered results based on those observations.
In recent years we have started to see more of a position of participation in web search engines. Examples of participation include:
Participation looks like a core strategy for search.