Social media and public health is a diverse field, and there is always some new corner to explore! These days I am expanding my knowledge of the use of social media for disaster management and coordination. The reason is that next week I will be giving a lecture on the topic to students in the Master of Disaster Management programme at the University of Copenhagen.
It has been exciting to dig into a new field and to experience how social media really presents great new opportunities, but of course also new challenges. Since I haven’t previously worked specifically with disaster management, a few weeks ago I chose to ask my Twitter followers for help in finding good literature and resource people in the field. And once again, Twitter didn’t let me down.
Blogs, websites and hashtags
I got a lot of great tips on blogs, websites, Twitter chats, hashtags and people to follow and hook up with on Twitter (a big thank you to all of you who responded!).
The blogs are a good starting point, especially since most of them offer great links to other resources. The most helpful so far has been the website/blog Social Media 4 Emergency Management, which provides access to wikis, archives of Twitter chats (#smemchat), videos, blogs and more on social media and emergency management. The only ‘problem’ with the website is that there is almost too much information.
Another super helpful resource is the blog idisaster2.0 (primarily run by @kim26stephens). It has lots of informative blog posts as well as a good bibliography of selected academic and government resources on social media and emergency management.
Own experiences with disasters and social media?
When I was asked to give the lecture, I hesitated for a moment, because what did I know about emergencies and disasters? Apart from my solid knowledge of social media in public health, including some superficial insight into its role in disasters, I had never had anything to do with disasters, least of all experienced one… However, the latter is not true, I quickly realised. I have actually, to some extent, been in an emergency setting, and I have in fact experienced the role of social media in a disaster situation.
Earthquake in Japan in 2011
I was in Japan, when the big earthquake, subsequent tsunami and finally the Fukushima nuclear plant crisis occurred in March 2011. Being relatively far from the epicenter of the disaster (I was based in Kobe in the Kansai region), I wasn’t directly surrounded by flooded buildings, elevated radiation risks or other immediate danger. But I was surrounded by potential danger, by worried friends and family in Denmark and by Japanese friends and colleagues with close relatives in the affected areas.
Looking back on my Facebook timeline, I can now see how social media actually played an important role for me during the emergency. I used Facebook to assure others that I was okay and kept them updated on my situation. I started following the Danish Embassy in Japan’s Facebook page, through which they shared information several times daily about risks, advice on how to act and the organisation of a potential evacuation. I encouraged the mobilisation of emotional and financial support for Japan by sharing links and QR codes. And I experienced how a Japanese colleague of mine, after days of no contact with her sister living in Sendai where the tsunami hit, finally got in contact through Facebook and found out that she and her family were safe…
So yes, I have actually experienced a disaster, and experienced how social media can be used in this kind of situation. I plan to share my experiences as a case with the students next week and hope that this real-life experience can contribute to their understanding and spark some discussion.
Your help
Although I have already received great tips from people on Twitter, I am still a happy receiver of input on social media and emergency/disaster management. Suggestions on discussion topics, assignments or any other ideas on how to involve the students are more than welcome, as are links to guidelines, scientific articles etc.
On December 21, 2012, Blake Ross—the boy genius behind Firefox and currently Facebook’s Director of Product—posted this to his Facebook page:
Some friends and I built this new iPhone app over the last 12 days. Check it out and let us know what you think!
The new iPhone app was Facebook Poke. One of the friends was Mark Zuckerberg, Facebook’s founder and CEO. The story behind the app’s speedy development and Zuckerberg’s personal involvement holds lessons for the practice of digital humanities in colleges and universities.
Late last year, Facebook apparently entered negotiations with the developers of Snapchat, an app that lets users share pictures and messages that “self-destruct” shortly after opening. Feeding on user worries about Facebook’s privacy policies and its use and retention of personal data, Snapchat had taken off among young people in a matter of weeks. By offering something Facebook didn’t—confidence that your sexts wouldn’t resurface in your job search—Snapchat exploded.
It is often said that Facebook doesn’t understand privacy. I disagree. Facebook understands privacy all too well, and it is willing to manipulate its users’ privacy tolerances for maximum gain. Facebook knows that every privacy setting is its own niche market, and if its privacy settings are complicated, it’s because the tolerances of its users are so varied. Facebook recognized that Snapchat had filled an unmet need in the privacy marketplace, and tried first to buy it. When that failed, it moved to fill the niche itself.
Crucially for our story, Facebook’s negotiations with Snapchat seem to have broken down just weeks before a scheduled holiday moratorium for submissions to Apple’s iTunes App Store. If Facebook wanted to compete over the holiday break (prime time for hooking up, on social media and otherwise) in the niche opened up by Snapchat, it had to move quickly. If Facebook couldn’t buy Snapchat, it had to build it. Less than two weeks later, Facebook Poke hit the iTunes App Store.
Facebook Poke quickly rose to the top of the app rankings, but has since fallen off dramatically in popularity. Snapchat remains among iTunes’ top 25 free apps. Snapchat continues adding users and has recently closed a substantial round of venture capital funding. To me Snapchat’s success in the face of such firepower suggests that Facebook’s users are becoming savvier players in the privacy marketplace. Surely there are lessons in this for those of us involved in digital asset management.
Yet there is another lesson digital humanists and digital librarians should draw from the Poke story. It is a lesson that depends very little on the ultimate outcome of the Poke/Snapchat horse race. It is a lesson about digital labor.
Mark Zuckerberg is CEO of one of the largest and most successful companies in the world. It would not be illegitimate if he decided to spend his time delivering keynote speeches to shareholders and entertaining politicians in Davos. Instead, Zuckerberg spent the weeks between Thanksgiving and Christmas writing code. Zuckerberg identified the Poke app as a strategic necessity for the service he created, and he was not too proud to roll up his sleeves and help build it. Zuckerberg explained the management philosophy behind his “do it yourself” impulse in the letter he wrote to shareholders prior to Facebook’s IPO. In a section of the letter entitled “The Hacker Way,” Zuckerberg wrote:
The Hacker Way is an approach to building that involves continuous improvement and iteration. Hackers believe that something can always be better, and that nothing is ever complete. They just have to go fix it – often in the face of people who say it’s impossible or are content with the status quo….
Hacking is also an inherently hands-on and active discipline. Instead of debating for days whether a new idea is possible or what the best way to build something is, hackers would rather just prototype something and see what works. There’s a hacker mantra that you’ll hear a lot around Facebook offices: “Code wins arguments.”
Hacker culture is also extremely open and meritocratic. Hackers believe that the best idea and implementation should always win – not the person who is best at lobbying for an idea or the person who manages the most people….
To make sure all our engineers share this approach, we require all new engineers – even managers whose primary job will not be to write code – to go through a program called Bootcamp where they learn our codebase, our tools and our approach. There are a lot of folks in the industry who manage engineers and don’t want to code themselves, but the type of hands-on people we’re looking for are willing and able to go through Bootcamp.
Now, listeners to Digital Campus will know that I am no fan of Facebook, which I abandoned years ago, and I’m not so naive as to swallow corporate boilerplate hook, line, and sinker. Nevertheless, it seems to me that in this case Zuckerberg was speaking from the heart and not the wallet. As Business Insider’s Henry Blodget pointed out in the days of Facebook’s share price freefall immediately following its IPO, investors should have read Zuckerberg’s letter as a warning: he really believes this stuff. In the end, however, whether it’s heartfelt or not, or whether it actually reflects the reality of how Facebook operates, I share my colleague Audrey Watters’ sentiment that “as someone who thinks a lot about the necessity for more fearlessness, openness, speed, flexibility and real social value in education (technology) — and wow, I can’t believe I’m typing this — I find this part of Zuckerberg’s letter quite a compelling vision for shaking up a number of institutions (and not just “old media” or Wall Street).”
There is a widely held belief in the academy that the labor of those who think and talk is more valuable than the labor of those who build and do. Professorial contributions to knowledge are considered original research, while librarians’ and educational technologists’ contributions to these endeavors are called service. These are not merely imagined prejudices. They are manifest in human resource classifications and in the terms of contracts that provide tenure to one group and, often, at-will employment to the other.
Digital humanities is increasingly in the public eye. The New York Times, the Los Angeles Times, and the Economist all have published feature articles on the subject recently. Some of this coverage has been positive, some of it modestly skeptical, but almost all of it has focused on the kinds of research questions digital humanities can (or maybe cannot) answer. How digital media and methods have changed humanities knowledge is an important question. But practicing digital humanists understand that an equally important aspect of the digital shift is the extent to which digital media and methods have changed humanities work and the traditional labor and power structures of the university. Perhaps most important has been the calling into question of the traditional hierarchy of academic labor which placed librarians “in service” to scholars. Time and again, digital humanities projects have succeeded by flattening distinctions and divisions between faculty, librarians, technicians, managers, and students. Time and again, they have failed by maintaining these divisions, by honoring traditional academic labor hierarchies rather than practicing something like the hacker way.
Blowing up the inherited management structures of the university isn’t an easy business. Even projects that understand and appreciate the tensions between these structures and the hacker way find it difficult to accommodate them. A good example of an attempt at such an accommodation has been the “community source” model of software development advanced by some in the academic technology field. Community source’s successes and failures, and the reasons for them, illustrate just how important it is to make room for the hacker way in digital humanities and academic technology projects.
As Brad Wheeler wrote in EDUCAUSE Review in 2007, a community source project is distinguished from more generic open source models by the fact that “many of the investments of developers’ time, design, and project governance come from institutional contributions by colleges, universities, and some commercial firms rather than from individuals.” Funders of open source software in the academic and cultural heritage fields have often preferred the community source model assuming that, because of high level institutional commitments, the projects it generates will be more sustainable than projects that rely mainly on volunteer developers. In these community source projects, foundations and government funding agencies put up major start-up funding on the condition that recipients commit regular staff time—”FTEs”—to work on the project alongside grant funded staff.
The community source model has proven effective in many cases. Among its success stories are Sakai, an open source learning management system, and Kuali, an open source platform for university administration. Just as often, however, community source projects have failed. As I argued in a grant proposal to the Library of Congress for CHNM’s Omeka + Neatline collaboration with UVa’s Scholars’ Lab, community source projects have usually failed in one of two ways: either they become mired in meetings and disagreements between partner institutions and never really get off the ground in the first place, or they stall after the original source of foundation or government funding runs out. In both cases, community source failures lie in the failure to win the “hearts and minds” of the developers working on the project, in the failure to flatten traditional hierarchies of academic labor, in the failure to do it “the hacker way.”
In the first case—projects that never really get off the ground—developers aren’t engaged early enough in the process. Because they rely on administrative commitments of human resources, conversations about community source projects must begin with administrators rather than developers. These collaborations are born out of meetings between administrators located at institutions that are often geographically distant and culturally very different. The conversations that result can frequently end in disagreement. But even where consensus is reached, it can be a fragile basis for collaboration. We often tend to think of collaboration as shared decision making. But as I have said in this space before, shared work and shared accomplishment are more important. As Zuckerberg has it, digital work is “inherently hands-on and active”: “instead of debating for days whether a new idea is possible or what the best way to build something is, hackers would rather just prototype something and see what works”; “the best idea and implementation should always win—not the person who is best at lobbying for an idea or the person who manages the most people.” That is, the most successful digital work occurs at the level of work, not at the level of discussion, and for this reason hierarchies must be flattened. Everyone has to participate in the building.
In the second case—projects that stall after funding runs out—decisions are made for developers (about platforms, programming languages, communication channels, deadlines) early on in the planning process that may deeply affect their work at the level of code sometimes several months down the road. These decisions can stifle developer creativity or make their work unnecessarily difficult, both of which can lead to developer disinterest. Yet experience both inside and outside of the academy shows us that what sustains an open source project after funding runs out is the personal interest and commitment of developers. In the absence of additional funding, the only thing that will get bugs fixed and forum posts answered are committed developers. Developer interest is often a project’s best sustainability strategy. As Zuckerberg says, “hackers believe that something can always be better, and that nothing is ever complete.” But they have to want to do so.
When decisions are made for developers (and other “doers” on digital humanities and academic technology projects such as librarians, educational technologists, outreach coordinators, and project managers), they don’t. When they are put in a position of “service,” they don’t. When traditional hierarchies of academic labor are grafted onto digital humanities and academic technology projects that owe their success as much to the culture of the digital age as they do to the culture of the humanities, they don’t.
Facebook understands that the hacker way works best in the digital age. Successful digital humanists and academic technologists do too.
[This post is based on notes for a talk I was scheduled to deliver at a NERCOMP event in Amherst, Massachusetts on Monday, February 11, 2013. The title of that talk was intended to be "'Not My Job': Digital Humanities and the Unhelpful Hierarchies of Academic Labor." Unfortunately, the great Blizzard of 2013 kept me away. Thankfully, I have this blog, so all is not lost.]
[Image credit: Thomas Hawk]
The web search community, in recent months and years, has heard quite a bit about the 'knowledge graph'. The basic concept is reasonably straightforward - instead of a graph of pages, we propose a graph of knowledge where the nodes are atoms of information of some form and the links are relationships between those atoms. The knowledge graph concept has become established enough for it to be used as a point of comparison between Bing and Google.
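The idea can be made concrete with a toy example. Below is a minimal sketch (all entities and relation names are my own illustrative inventions, not taken from any real engine) of a knowledge graph stored as subject-predicate-object triples, with a one-hop traversal function:

```python
# A toy knowledge graph as a set of (subject, predicate, object) triples.
# The entities and relation names are hypothetical, chosen to mirror the
# performer/venue example discussed below.
triples = {
    ("Kodo", "type", "taiko group"),
    ("Kodo", "performs_at", "Meany Hall"),
    ("Meany Hall", "type", "venue"),
    ("Meany Hall", "located_in", "University of Washington"),
}

def links_from(entity):
    """Return the outgoing edges of a node: one hop through the graph."""
    return {(pred, obj) for (subj, pred, obj) in triples if subj == entity}

# Traverse from performer to venue to campus, hop by hop.
for pred, obj in links_from("Kodo"):
    print("Kodo", pred, obj)
for pred, obj in links_from("Meany Hall"):
    print("Meany Hall", pred, obj)
```

The point of the triple representation is that each hop is just a lookup, so a user (or a search engine) can walk from any node to its neighbours without knowing the whole graph in advance.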
Last night, I went to see a performance of Kodo - regarded internationally as the premier taiko group. A search on Bing for 'kodo' produced the following result:
Bing showed good results for the web and images, as well as a knowledge-driven portion of the answer from Wikipedia with links to play some of their songs. Not bad - but no mention of the performance.
As Kodo were performing at Meany Hall on the University of Washington campus, I did another search on Bing for the venue:
Here we see something better - the venue is recognized as a venue and consequently joined with the events that are known to Bing, including the concert I was attending. As the event information included a link to the performer (the blue Kodo link in the screenshot), I followed it and found that Bing gave me event information for the performer as well.
In these interactions, we can see part of the promise of the knowledge graph, but many areas for improvement. The event node relates the performer to the venue to the event. However, the venue information in this part of the graph is isolated from that used to deliver the result for the query purely about the venue (note that the addresses are different - a common problem with campus and mall-like areas). The above experience, I think, shows the true challenge of the knowledge graph proposition - bringing all the isolated data graphs together correctly when the nodes in the graphs are actually representations of the same real-world entities.
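That reconciliation problem can itself be sketched in a few lines. Assume, purely for illustration (the names, addresses, and matching rule below are made up, and certainly not Bing's actual pipeline), that two sub-graphs each hold a node for the same venue under slightly different names:

```python
# Two records of the same real-world venue, from different sub-graphs,
# with mismatched addresses - the situation described above. All values
# here are hypothetical stand-ins.
venue_a = {"name": "Meany Hall", "address": "Campus Parkway, Seattle"}
venue_b = {"name": "Meany Hall for the Performing Arts",
           "address": "University of Washington, Seattle"}

def normalize(name):
    return name.lower().strip()

def likely_same_entity(a, b):
    """Crude reconciliation: one name is a prefix of the other.
    Real entity resolution would weigh many more signals
    (geocoding, phone numbers, cross-source identifiers)."""
    na, nb = normalize(a["name"]), normalize(b["name"])
    return na.startswith(nb) or nb.startswith(na)

if likely_same_entity(venue_a, venue_b):
    # Merge into one canonical node, so that venue queries and event
    # queries traverse to the same place in the unified graph.
    canonical = {**venue_b, **venue_a}
```

Even this toy version shows why the problem is hard: the matching rule has to be loose enough to unify genuine duplicates but strict enough not to merge two different venues that happen to share a name.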
Note that in exploring this particular scenario, Bing appeared to be doing a little better than Google, though Google had partial event information associated with the Kodo entity.
As these names are possibly taken from listings information from different sources, the name of the performer is confusingly presented in different forms.
Much of what we see out there in the form of knowledge returned for searches is really isolated pockets of related information (the date and place of birth of a person, for example). The really interesting things start happening when the graphs of information become unified across type, allowing - as suggested by this example - the user to traverse from a performer to a venue to all the performers at that venue, etc. Perhaps 'knowledge engineer' will become a popular resume buzzword in the near future, as 'data scientist' has become recently.
The early days of web search were essentially about observation. The web search engine observed the web (documents, links and user behaviours) and then delivered results based on those observations.
In recent years we have started to see more of a position of participation in web search engines. Examples of participation include:
Participation looks like a core strategy for search.
What do we really know about how our students generate answers to historical questions? Thanks to Sam Wineberg, Peter Seixas, Bob Bain, Stephane Levesque, and others in their orbits, we know a good bit about how K-12 history students reach their conclusions about the past, but when it comes to higher education, we know far too little. In fact, we’re often puzzled by the answers our students arrive at. Why did they assign great importance to a particular piece of evidence when our view is that this piece of evidence was just a run-of-the-mill source, not particularly worthy of extra attention? Why is it so hard to shake them from their belief that, say, people in the past wanted the same things that people today want?
To date, too many of our answers to these and other such questions have been based on folk wisdom about “kids today” or an over-reliance on what we observe in our classrooms as being representative of “all students.” Real research, based on real data, would surely take us much farther down the road toward understanding how our students think.
Fortunately, scholars in disciplines other than history have done some hard thinking about these issues and, just as fortunately, have done that real research, generating real data.
It’s not every day that a historian reads an article with a title like “The Role of Intuitive Heuristics in Students’ Thinking: Ranking Chemical Substances,” but read it you should. [Science Education, 94/6, November 2010: 963-84] The authors, Jenine Maeyer and Vincente Talanquer, proceed from the assumption that the better we understand how our students think, the better our curricula can be. This is an entirely different approach from one that asks, “What should students who graduate with a degree in chemistry/history/sociology know?” That question needs to be answered in every discipline, but if learning is the goal of our teaching, then we must understand how that learning occurs as we design those curricula. To do otherwise is to waste our time and our students’.
Maeyer and Talanquer begin with a question: What are the cognitive constraints that impede their students’ ability to engage in the kind of careful and complex analysis that they want to induce in their courses? Drawing on 30 years’ worth of research from cognitive science as well as classroom research in the sciences, they describe two constraints and four reasoning strategies arising from those constraints. While they are writing about the analysis of chemical substances, a history teacher could very easily substitute “primary sources” and “history” for “substances” and “chemistry” and learn a lot from their results.
The two cognitive constraints they describe are implicit assumptions and heuristics (short cut reasoning procedures). In history, an implicit assumption would be that during the era of the women’s suffrage movement, all women wanted the vote, because of course women would want the vote. These implicit assumptions are very powerful and difficult to break down, in large part because they are so rooted in a learner’s view of how the world is.
Heuristics are the root of many problems in education in whatever discipline, but the authors argue that if students can learn how these heuristics govern their analytical strategies, they can then begin to learn differently. And once that happens, they are more likely to examine their implicit assumptions about the world.
All of us are beneficiaries and victims of our own heuristics. For example, the quick thinking that results from years of driving experience helps us recognize, without even thinking about it, that the car in front of us is about to do something stupid, so we slow down and give the driver room to do whatever he is about to do. The short cut reasoning procedures we develop as drivers lead us to reasonable conclusions at lightning speed.
But our short cut reasoning can also lead us into errors of analysis. Maeyer and Talanquer identify four heuristics that get in the way of the kinds of learning we want to induce: the representativeness heuristic, the recognition heuristic, one-reason decision making, and the arbitrary trend heuristic.
The representativeness heuristic is one in which we judge things as being similar based on how much they resemble one another at first glance. We see this often in our history classrooms as, for instance, when a student leaps to the conclusion that two works of art separated by both temporal and cultural boundaries must be similar because they kind of look alike.
The recognition heuristic is what happens when we look at a number of pieces of historical evidence, but recognize only one of them, and so assign a higher value to the one we recognize for no reason other than that we recognize it. In the history classroom, this happens when a student is confronted with four or five texts, one of which is familiar, and so focuses all of her attention on that text, to the point of deciding that this text is the most important in the group, even if it is not.
One-reason decision making happens when students make their decisions about evidence based on a single differentiating characteristic of that evidence. So, for instance, in that group of four or five texts, our student might decide that because only one of them actually mentioned something of importance that she is studying, it is somehow more important than the other four when trying to figure out what happened back when the texts were written.
The arbitrary trend heuristic is one we see not only in our students, but in the works of our colleagues. Because several historical sources were generated within a few miles of one another, or within a few weeks of one another, we assume that they must, somehow, be connected to one another, without any evidence to support this hypothesis.
All of these heuristics occur at various moments throughout the semester in our classrooms, regardless of the discipline we teach. Not all students utilize these short cut strategies all the time, but most of them deploy one or the other at some point in the semester. Knowing that this is the case, we can then design our courses to address these thinking strategies.
I wish someone had assigned me this article 20 years ago. Of course, it hadn’t been written yet, so that wouldn’t have been possible. But if it had, and I’d read it back before I started teaching history, my life would have been so much easier and my students’ learning would have been so much richer.
I’ve had this call for papers for the ‘The Return of Biography: Reassessing Life Stories in Science Studies’ workshop at Science Museum on 18 July lying on my desktop for months:
The lived life serves as an organising principle across disciplines. We talk of the biographies of things and places, and we use personal narratives to give shape to history. Biography is central to historians’ work but often unacknowledged and untheorised: it is used to inspire and to set examples, and to order our thinking about the world, but is a primarily a literary mode; biographies written for popular audiences provide material for the most abstruse work across disciplines; and the canon of well-known lives dictates fashions in research.
For historians of science, technology and medicine this is a particularly pressing issue: their discipline is founded on the ‘great men’ account of discovery and advance, and, though that has long since been discarded, the role of the individual in historical narratives has not diminished, and heroic tales have themselves become a legitimate subject of inquiry. For writers and researchers in other fields, the question remains: how do the lives of individuals intersect with cultural trends and collective enterprise?
It has been lying there since November because there are so many different things in it that I would like to take issue with:
- Isn’t the notion of ‘return’ of biography long overdue?
- Does the notion of ‘biographies’ of things and places make sense?
- Are biography and historiography necessarily narrative (story-telling) genres?
- Is it really true that the role of the individual in historical writing hasn’t diminished?
But given the restriction of a 20-minute talk and my need to say something new, I eventually decided (though not until deadline day, last Friday) that I would rather engage with the explicit occasion for the workshop, Science Museum’s Turing exhibition, and ask whether biographical museum exhibitions are really possible.
I have to confess I haven’t seen Codebreaker yet — but will certainly do so, before the workshop (and if my abstract is accepted).
However, I have long been thinking about making a biographical exhibition here at Medical Museion. I would like to be able to combine the two major strands of my scholarly life so far, which are (1) writing (about) biography and (2) curating (and reflecting on) the use of material artefacts in science museum exhibitions.
So far, however, I haven’t really seriously tried — and I think there are two reasons for this lack of action from my side.
One is more conceptual, having to do with the uncertain role of material things in the life-courses of scientists as opposed to the role of ideas, concepts, writing, etc. Symbols and text on paper and images have such a prominent place in the self-awareness of scientists. Just read their autobiographies; there are ideas, concepts, theories etc. on every page. But material artefacts play a much more humble role in the way scientists understand themselves in interviews and autobiographical reports.
The other reason is more practical: scientists like to save documents and images from their work for the archives, and archives are by tradition often organised into person-defined document collections. But scientists rarely donate the material things they have worked with to museum collections. Material artefacts are mostly collected by museums with an eye to the historical importance of the things rather than as personal material archives.
All this makes it difficult to display the material life of an individual scientist. The ‘material turn’ in the humanities doesn’t easily translate into artefact-based museum exhibitions about lives in science.
(featured image: cover of T. Soderqvist, ed., The History and Poetics of Scientific Biography, Ashgate 2007)
One episode closer to the century mark, Amanda, Dan, Mills, and Tom welcome Kathleen Fitzpatrick and Tim Carmody for a debriefing on digital developments at the annual meetings of the MLA and AHA and a discussion of the tragic suicide of programmer and activist Aaron Swartz.
Links mentioned on the podcast:
Dan Cohen, Digital History at the 2013 AHA Meeting
Mark Sample, Digital Humanities at MLA 2013
MLA Commons
Aaron Swartz (Wikipedia)
Tim Carmody, Memory to myth: tracing Aaron Swartz through the 21st century
Running time: 58:04
Download the .mp3
Last week I was delighted to be back at my old stomping grounds at Rice University’s Digital Media Commons to lead a workshop on “Doing Things with Text.” The workshop was part of Rice’s Digital Humanities Bootcamp Series, led by my former colleagues Geneva Henry and Melissa Bailar. I hoped to expose participants to a range of approaches and tools, provide opportunities for hands-on exploration and play, and foster discussion about the advantages and limitations of text analysis, topic modeling, text encoding, and metadata. Although we ran out of time before getting through my ambitious agenda, I hope my slides and exercises provide useful starting points for exploring text analysis and text encoding.
Here’s my short speech at the opening of Biohacking: Do it yourself! last Thursday evening:
In true hacker style, this opening is somewhat ad hoc-ish. We will spend about 20 minutes up here in the old auditorium; several people will say a few introductory words each, in several languages.
Then — because there isn’t room for us all down there — the speakers will go downstairs to the biohacker lab, where they will make the official opening (snip, snip with the scissors) while the web camera projects it onto the screen. And finally you will get drinks and popcorn from the microwave, and you can move freely between this floor and the biohacker lab.
So why are we doing this? What’s a biohacker lab doing in a medical museum, in this venerable old building from 1787? It’s not an irrelevant question, because some of our visitors think a museum like ours should restrict itself to real medical history – the history of epidemic diseases, surgical instruments from the 18th and 19th centuries, gory human body parts etc.
OK, believe it or not, we’re still in the history business. We’re still displaying things from the gruesome medical past. But we are also very eager to engage with the present and the future. As some of you know, our latest exhibition is about the current obesity epidemic and the brand new treatment method called gastric bypass surgery that accidentally also cures type 2 diabetes.
In the exhibition (or rather installation) you’ll see tonight, we’re taking yet another step away from the past, to the future of biology and medicine — to the emerging worlds of synthetic biology and biohacking.
Other speakers will say more about synthetic biology and biohacking in a few moments. I’ll just give you the background to this project.
The idea behind the exhibition/installation started three years ago, when some ten small European science centres and art institutions met at Le Laboratoire in Paris to prepare an application to the European Community for an art-science project called StudioLab.
One of the themes we decided on at the Paris meeting was synthetic biology – a very hot topic among life scientists: using small parts of life to build more complicated living systems, like building with the famous Lego bricks.
What was then, three years ago, a pretty vague idea has now materialized in a very concrete art-design-science installation – thanks to an interdisciplinary collaboration between a couple of biohackers and scientists, an installation designer, a science communication specialist and a historian of ancient technology. They come from the UK, Germany, the United States and Denmark, so this is a truly international project team, based locally here in Copenhagen.
Before I give the word over to the people who made this come true, I will say that it hasn’t escaped my notice that the idea of biohacking may have further implications for a museum like ours, and maybe for museums in general.
Because there’s something in the hacker culture – whether it’s computer hacking or biohacking – that points to the ongoing cultural change in the museum world. As I said to one of the biohackers at dinner earlier tonight: museums are struggling to become more open, to involve their users, to draw on the creativity of non-professionals, to crowdsource the cultural heritage, to engage citizens in the construction and re-construction of collections and exhibitions. The do-it-yourself attitude is spreading to museums too.
This is what some museum people call ‘museum 2.0’. It’s pretty similar to what social media are doing to the world of publishing right now, or to what biohackers are trying to do for the life sciences.
As a museum, I think we have a great deal to learn from hacker culture – and I’m proud that we have been able to engage people from the local biohacker community here in Copenhagen to help us, not only to open this particular installation, but in the long run to help us rethink what a museum might be.
Now, I will give the word to Rüdiger Trojok, a molecular biologist who’s currently finishing his master’s at the Technical University of Denmark; Malthe Borch, who has a master’s in biological engineering and is a co-founder of the local biohacker space BiologiGaragen here in Copenhagen; and Sara Krugman, who’s an interaction designer currently completing her master’s at The Copenhagen School of Interaction Design.
(Rüdiger, Malthe and Sara give short speeches)
Thank you, Rüdiger, Malthe and Sara! And now over to Emil Polny, who’s a project coordinator at the Center for Synthetic Biology here at the University of Copenhagen.
(Emil gives a short speech)
Thank you, Emil! And finally I’ll give the word to the people here at Medical Museion who have organised and curated the biohacking space: Karin Tybjerg, who’s an associate professor with a background in the history of science and technology, and Louise Whiteley, who’s an assistant professor with a background in theoretical neuroscience and science communication studies.
(Karin and Louise give short speeches)
Thank you, Karin and Louise! And now comes the tricky logistical part of the opening. I will ask you all to wait here for two minutes – we’ll show a short video while you wait – while our presenters walk down to the biohacking lab to open it. The reason is that the lab room is so small that we cannot all be in there – so they will cut the ribbon in front of a video camera, and we’ll transmit it over the web and stream it onto the screen behind me. And after they have cut the ribbon you can do whatever you want – take a drink, eat some popcorn, sit and talk – or even go down and visit the biohacker space.
Thank you very much! Enjoy your evening.
You’ve probably been there. A new job, a new project team, a new client. A great first meeting. Everyone is invited to talk, to listen, to contribute. Everyone is assured that their voices will be heard, their concerns addressed, their ideas taken seriously.
Fast forward a week, a month, a year. One by one, those voices have been silenced, those concerns dismissed, those ideas undermined. What remains are the ideas and concerns of the person who (it has now become clear) is in charge.
To do their jobs effectively, members of a project team need to know who the decision maker is. We all like democracy, those of us in education and cultural heritage especially so. If it’s truly a democracy, great. But if it’s a dictatorship, people would rather know from the outset than be led down a rhetorical primrose path of “democracy,” “consensus,” and “collaboration” only to have the rug pulled out from under them when the decision maker finally decides to assert his or her will.
If you are the decision maker, let us know. Anything less treats team members like children and wastes everybody’s time. What’s worse, it makes for shortsighted, haphazard, second-rate work product.
Having recently returned from a trip to Kauai where I used my beach search engine with middling success, I've now got a few updates out on the site.
Firstly, there is a full map showing either all the beaches in a location, or all the beaches from a search within a location. This was a pretty obvious missing feature.
Secondly, as this is an active map, you can zoom and pan it, which interactively restricts the set of results to the visible area.
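The zoom-and-pan restriction amounts to a bounding-box filter re-run on each map move. A minimal sketch of the idea (hypothetical names throughout – this is not the site's actual code, just an illustration of the technique):

```python
def in_viewport(beach, bounds):
    """Check whether a beach's coordinates fall inside the map viewport.

    `beach` is a dict with 'lat' and 'lon' in degrees; `bounds` is a
    (south, west, north, east) tuple in degrees.
    """
    south, west, north, east = bounds
    return south <= beach["lat"] <= north and west <= beach["lon"] <= east


def filter_results(beaches, bounds):
    # Re-run on every zoom/pan event to restrict the visible result set.
    return [b for b in beaches if in_viewport(b, bounds)]


beaches = [
    {"name": "Poipu Beach", "lat": 21.874, "lon": -159.454},
    {"name": "Hanalei Bay", "lat": 22.205, "lon": -159.502},
]
# A viewport covering roughly the south shore of Kauai:
south_shore = (21.8, -159.6, 22.0, -159.3)
print([b["name"] for b in filter_results(beaches, south_shore)])  # → ['Poipu Beach']
```

Note that a simple `west <= lon <= east` check breaks for viewports crossing the antimeridian, so a real implementation would need a wrap-around case there.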
There are some minor improvements to other elements of the site as well.
Note - something that always interests me is the relationship between back-end data quality and the presentation of the data. Having a complete map of beaches highlights cases where there are duplicates in the results (a topic for another post).
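One way such a map surfaces duplicates is that two entries for the same beach plot almost on top of each other. A hedged sketch of how you might flag them automatically – pairs of entries within a small great-circle distance of each other (hypothetical data and threshold, not the site's actual de-duplication logic):

```python
import math


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


def likely_duplicates(beaches, threshold_km=0.1):
    """Return name pairs of entries closer than `threshold_km` apart.

    Brute-force O(n^2) comparison; fine for a few thousand beaches.
    """
    pairs = []
    for i in range(len(beaches)):
        for j in range(i + 1, len(beaches)):
            a, b = beaches[i], beaches[j]
            if haversine_km(a["lat"], a["lon"], b["lat"], b["lon"]) < threshold_km:
                pairs.append((a["name"], b["name"]))
    return pairs


beaches = [
    {"name": "Poipu Beach", "lat": 21.8741, "lon": -159.4542},
    {"name": "Po'ipu Beach Park", "lat": 21.8743, "lon": -159.4540},  # same place, variant name
    {"name": "Hanalei Bay", "lat": 22.2050, "lon": -159.5020},
]
print(likely_duplicates(beaches))
```

Flagged pairs would still need a human (or a name-similarity check) to confirm before merging, since two genuinely distinct beaches can sit close together.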
If you are heading to Hawaii - give it a try and let me know how you get on.
Related articles:
Snorkel* and Surf* in Kauai and Maui
The State of Hawai'i Demands a New Search Engine