Lots of people have heard of Parkinson’s Law:
> Work expands so as to fill the time available for its completion.
Anyone who has worked for any length of time knows this to be true. Looking at it from another angle, you’re never quite finished – yes, you finish projects, empty inboxes and get through to-do lists, but there’s always, always something else.
When you work for someone else, this is ok – you reach your lunch break or 5.30 or whenever you’re due to stop and as long as you know you’ve done what you can and your boss isn’t going to fire you / shout at you / give the job to someone else, you find it easy(ish) to walk away.
The challenge is very different, however, when you work for yourself. Then those outside constraints – the “it-really-is-time-to-stop” alarm clock – don’t exist.
I reckon there are three main reasons why this is:
1. Your edges – your work-life balance – aren’t so clearly cut. This might be because physically they’re blurred – i.e. you work from home and your desk is also your dinner table; or it might be more about the intangible – you can (and do) access your email at any time of the night and day. Your business becomes your life and your life becomes your business.
2. It’s YOUR thing – your business, your company, your idea, your reputation – saying “fuck it, I need to stop” becomes infinitely harder when you’re embedded in something you believe in and have invested in.
3. There genuinely isn’t an end to the work that needs to be done when you’re working for yourself. Yes, you might have got to inbox-0, got all the client work out of the way and done your invoicing, but there’s always the improvements, the business development, the file shuffling, receipt printing, content writing….
I’ve worked for myself running a digital agency with my wife now for coming up to two years. I love it, and we both work extremely hard at it, but I’ve only recently come to see that a positive acceptance of Parkinson’s Law (rather than a resistance to it) is a hugely important thing for the self-employed. I know far too many people (you know who you are) who work for themselves and stress the hell out of their entire lives 24/7. They might be doing incredible stuff, but many of them spend their weekends and evenings working and their lives stressing.
By positively accepting that I’ll never, ever get everything done – and that it’s ok for this to be the case – I have found it much easier to find a sane, guilt-free, family-friendly work/life balance. As an example, we’re now working to a 9am-3pm daily schedule (which fits in with school hours) and try to use Thursdays and Fridays as “look ahead” days to develop new ideas and processes. The short day is highly effective – we get as much done in those intensive 6 hours as we would in a “normal” day of 8 hours AND I get the pleasure of hanging out with my kids after school too. The Thursday/Friday thing is challenging at times as client work almost always tries to invade time set aside for future-thinking, but we’re getting better at being disciplined with this. Evenings and weekends are – with very, very occasional exceptions – sacred, set aside for non-work stuff.
It seems to me that one of the huge luxuries of working for yourself – and one that surprisingly few self-employed people I know take advantage of – is the flexibility to choose when NOT to work.
Another piano fiddling – as always it’s a work in progress and needs lots of stuff to make it more complete, but I like the sequence and think it might have legs…
So the whole NSA thing kicked off and the entire internet is full of commentary, as you’d expect.
As with any new piece of news, HUGE REVELATION is followed by some detailed picking apart. (Right now, the biggie seems to be “what does ‘direct access’ to servers actually mean?” – next up, “Why did Edward Snowden identify himself as the whistleblower?”.)
The interesting thing about this debate is that although it’s clearly a good thing to pick apart potentially sensationalist bits of broad-brush news (“YOU ARE BEING SPIED ON”) and focus on the detail (“WHAT DOES ‘SPYING’ MEAN IN THIS CONTEXT?”), there is also – I think – a danger: if the focus becomes too specialised, you not only lose audience interest and impetus as the detail is debated by experts in that particular niche field, but you also potentially lose sight of the big picture.
This picture seems to me to be the single most important thing, and it echoes Snowden’s stated reasons for coming forward:
> I don’t want to live in a society that does these sorts of things
This isn’t about the detailed debate as to whether this kind of surveillance helps or hinders terrorism, this isn’t about what “metadata” is in this context, it isn’t even about where particular allegiances lie. It’s about the flavour of the place we want to create as a civilised, intelligent and compassionate society.
In debating the importance of privacy with friends, the most common response is this: “I’m innocent. I have nothing to fear” – and it is almost exactly these words that the British government is using in pretty much every interview I’ve heard about NSAgate. “British citizens have nothing to fear,” said Malcolm Rifkind on Radio 4 today. Subtext: “If you’re guilty, fear. If you’re not, fear not.”
Really? So you’re happy, sitting in a pub chatting to your friends, for a total stranger to pull up a chair and listen in to you talking about the fact you fancy the barman? You’re ok with someone borrowing your phone and looking at the last ten numbers you dialled? You have no problem at all with someone totally unknown friending you on Facebook, or reading your diary? You’re innocent, right, so all of this is ok with you? “It’s just metadata,” you say – “the Government doesn’t know what we said, they just know who we said it to, and that’s ok”.
Well look, here’s what the EFF says about metadata:
> They know you rang a phone sex service at 2:24 am and spoke for 18 minutes. But they don’t know what you talked about.
> They know you called the suicide prevention hotline from the Golden Gate Bridge. But the topic of the call remains a secret.
> They know you spoke with an HIV testing service, then your doctor, then your health insurance company in the same hour. But they don’t know what was discussed.
> They know you received a call from the local NRA office while it was having a campaign against gun legislation, and then called your senators and congressional representatives immediately after. But the content of those calls remains safe from government intrusion.
> They know you called a gynecologist, spoke for a half hour, and then called the local Planned Parenthood’s number later that day. But nobody knows what you spoke about.
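To make the distinction concrete, here is a rough, hypothetical sketch (in Python) of the kind of record that “metadata-only” collection still produces – every field name and value below is invented for illustration, not taken from any real system:

```python
# A hypothetical call-detail record. Everything here is invented for
# illustration; note there is no "content" field at all, yet the
# who/when/how-long fields alone tell a very detailed story.
call_record = {
    "caller": "+44 7700 900123",           # fictitious number
    "callee": "+44 20 7946 0999",          # fictitious number
    "started_at": "2013-06-10T02:24:00Z",  # when the call was placed
    "duration_seconds": 18 * 60,           # how long it lasted
    "cell_tower": "LDN-0042",              # roughly where the caller was
}

# What an analyst sees: the pattern, not the words.
print(f"{call_record['caller']} -> {call_record['callee']}: "
      f"{call_record['duration_seconds'] // 60} min at {call_record['started_at']}")
```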
The thing is, the surveillance state that can be developed today is abusable in a way that is entirely unprecedented. Is there anyone out there who can genuinely claim that they have never tweeted or posted a comment, visited a website, sent a text or received an email which – taken out of context – could be used in nefarious ways by the next government or employer who wasn’t quite so happy that you went on that anti-war march ten years ago? I don’t think so. The “innocence” of your actions is in the eye of the beholder; it is contextual, changing with time and with circumstance. Blanket, panopticon-like surveillance of the kind described by Snowden sets a wholly dangerous precedent.
The bigger point is surely this: do we want to live in a society where someone is watching all the time?
However “innocent” you are, I don’t think you do.
I’m pleased with this one – it’s a nice mellow sequence and thanks to a re-share by the awesome ASIP, my most played on Soundcloud by a long, long margin…
“See! Now! Our sentence is up.”
That’s the last line of the last page of the last issue of The Invisibles, Grant Morrison’s pop magic comic book master work. That final issue came out right around Y2K, but it’s set on the December solstice of what was then the freaky-sounding future year 2012. All this year, every time I heard somebody cracking wise about the Mayan Apocalypse, I thought, “Unless you’re an ancient Mayan, you’re stealing Grant Morrison’s bit.”
I bought and read every issue of The Invisibles as it came out from 1994 to 2000. It’s the only comic I’ve ever followed so religiously. It’s brilliant and fun and a bit of a mess and it meant the world to me. It worked its way into my life and rewired the way I saw things, which is pretty much what it was intended to do. Yes, it’s dated now, but so am I. I can’t be any more objective about it than I could be objective about my twenties.
My first pointless argument with a stranger on the internet—you never forget your first time—was on an Invisibles fan site, and it was about whether or not the world was really going to end on December 21, 2012. The world of the comic book, that is. There’s much talk in The Invisibles about the End of Days, the Mayapocalypse, Glitterdammerung, you name it, but I figured all along that Grant was going to invoke the “As We Know It” clause—the end of the world wouldn’t be a literal end but just the dawning of a new age, an evolution to some higher (read: more Grant Morrison-like) state of being. When an early issue featured a flash forward to the year 2051 or so, showing the young protagonist Dane McGowan dying of old age, I said this proved Dane’s world wouldn’t end in 2012. My internet interlocutor said this was a false vision sent by the comic’s demonic baddies to demoralize Dane. I’m not sure how demoralizing it is to be told you will die peacefully in bed at the age of eighty-three, but the Archons of the Outer Church work in mysterious ways.
> And when it comes, I think you will agree that the difference between being crushed by the massive palm of the headless body of NUG-SHOHAB on the ruined plain of RAGNAROK versus dying alone in a hospital room with a television flickering images at you of a football player dancing with the stars is so small that it is not worth arguing over.
That’s John Hodgman in what has to be the year’s best apocalypse, That Is All. Now, Hodgman’s not stealing Morrison’s bit. He probably read all about the Mayan calendar back in 1979 in The People’s Almanac or The Book of Lists. Speaking of 1979, here’s Stephen King talking about the appeal of the apocalypse in Danse Macabre (I’ll admit it: King’s red-state apocalypse The Stand was almost as big a deal to me in high school as The Invisibles was to me in grad school):
> Much of the compulsion I felt while writing The Stand obviously came from envisioning an entire entrenched societal process destroyed at a stroke. I felt a bit like Alexander, lifting his sword over the Gordian knot and growling, ‘Fuck untying it; I’ve got a better way.’ … In this frame of mind, the destruction of THE WORLD AS WE KNOW IT became an actual relief. No more Ronald McDonald! No more Gong Show or Soap on TV—just soothing snow! No more terrorists! No more bullshit!
Pause for a moment to reflect that The Gong Show and Soap were once, apparently, arguments for the destruction of humanity. Today they’d be an improvement over many things on TV. (Come back, Chuck Barris, all is forgiven!) Setting that aside, who hasn’t felt like that, like the end of everything would be a kind of relief? Who hasn’t felt like that this week?
The lure of apocalypse is not just the thrill of destruction. It’s the dream of the blank slate. One stray comet, one deadly plague, one dolorous blow from the headless NUG-SHOHAB, and all our troubles will be over. Sure, the world will be a smoking ruin, but that term paper you have to write, those bills you have to pay, those intractable social problems that we just aren’t up to solving—they’ll all be made moot. Cosmic Do-Over. Tabula Rasa. Inbox Zero.
When I put this weblog in mothballs two years ago, I was feeling depressed about the internet, and all the ways in which it seemed to be falling short of what we’d hoped for it. I said it didn’t surprise and delight me any more. I still feel that way (see Anil Dash’s recent “The Web We Lost”), but I resist the urge to disown the optimism of the 1990s and early 2000s, no matter how naive or embarrassing it all seems in retrospect. That would let us all off the hook too easily for the things we didn’t accomplish. (For the record: the internet still surprises and delights me from time to time.)
Dreams of utopia and dreams of apocalypse are both ways of talking about the future—no, scratch that—they are ways of talking, if only obliquely, about a radically different present. In a society that seems to have given up trying to imagine anything much better than multinational capitalism, you need to sneak up on social criticism. Dress it up with rayguns or zombies. Set it in some freaky futuristic-sounding year like 2012.
That’s fine and all. But what are you supposed to do when the world doesn’t end? What do you do when the flying saucers don’t land, the Mayan star-demons don’t tear us to shreds, the Rapture comes and goes and God doesn’t take a single one of us? Or, what do you do when a revolutionary new technology rewires the world yet leaves all the power structures and patterns that predated it intact? How do you make your way and make a difference in a world that refuses to end, a world neither apocalypse nor utopia, a world where the slate will never come clean?
Toward the end of The Invisibles, one character tries to tell King Mob, the ass-kicking anarchist hero of the series, “Amid all the bangs and the drama and the grand passions, it’s kindness, and just ordinary goodness, that stands out in the end.” In fact, King Mob spends most of the comic shooting people, using time travel to hook up with women from different decades, and reminding everyone how cool he is. But Dane, resolutely unglamorous, saves the world just by being a good person. And the final issue finds Dane doing nothing more dramatic than giving comfort to a dying friend. I know which kind of heroism is more meaningful to me. This week especially.
The end of the Mayan calendar is not an apocalypse at all, of course. It’s just like Y2K – another big odometer rolling over. That can mean as much or as little as you want it to. What’s the end-of-the-world equivalent of “think globally, act locally”? Think apocalyptically, act… ordinarily? Think as if the world is ending, act as if it isn’t.
I don’t know how many times I’ve read The Invisibles, but I dug out the final issue to write this post and I noticed one more thing I’d never seen before. The final issue is actually set on December 22, 2012. The day after the Mayapocalypse. Suck on that, fourteen-years-ago internet arguer.
The first line on the first page of the first issue of The Invisibles is:
“And so we return and begin again.”
From Melissa and Twitter, a great visualization: London Lives on the Line. It shows life expectancy and poverty by the tube stops of London, demonstrating the rhetorical power of visualization to connect data to our lives.
Gartner has an interesting Hype Cycle Research methodology that is based on a visualization.
> When new technologies make bold promises, how do you discern the hype from what’s commercially viable? And when will such claims pay off, if at all? Gartner Hype Cycles provide a graphic representation of the maturity and adoption of technologies and applications, and how they are potentially relevant to solving real business problems and exploiting new opportunities.
The method assumes a cycle that new technologies pass through: from the Technology Trigger, through the Peak of Inflated Expectations and the Trough of Disillusionment, up the Slope of Enlightenment and, finally, onto the Plateau of Productivity.
Here is an example from Wikipedia:
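As a rough sketch of the idea (this is not Gartner’s methodology or data – the phase names are Gartner’s, but the technologies and placements below are invented purely for illustration), the cycle can be treated as an ordered sequence of phases against which a technology is positioned:

```python
# The five hype-cycle phases, in order. The placements are invented
# examples for illustration only; they are not Gartner assessments.
PHASES = [
    "Technology Trigger",
    "Peak of Inflated Expectations",
    "Trough of Disillusionment",
    "Slope of Enlightenment",
    "Plateau of Productivity",
]

placements = {
    "some emerging technology": "Peak of Inflated Expectations",
    "a maturing technology": "Slope of Enlightenment",
}

for tech, phase in placements.items():
    print(f"{tech}: phase {PHASES.index(phase) + 1} of {len(PHASES)} ({phase})")
```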
I’m at Digital Humanities 2012 in Hamburg. I’m writing a conference report on philosophi.ca. The conference started with a keynote by Claudine Moulin that touched on research infrastructure. Moulin was the lead author of the European Science Foundation report on Research Infrastructure in the Humanities (link to my entry on this). She talked about the need for a cultural history of research infrastructure (which the report actually provides). The humanities should not just import ideas and stories about infrastructure. We should use this infrastructure turn to help us understand the types of infrastructure we already have; we should think about the place of infrastructure in the humanities as humanists.
Susan pointed me to Pundit: A novel semantic web annotation tool. Pundit (which has a great domain name “thepund.it”) is an annotation tool that lets people create and share annotations on web materials. The annotations are triples that can be saved and linked into DBpedia and so on. I’m not sure I understand how it works entirely, but the demo is impressive. It could be the killer-app of semantic web technologies for the digital humanities.
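I haven’t dug into Pundit’s internals, but the underlying idea – an annotation expressed as a subject–predicate–object triple that points into DBpedia – can be sketched with rdflib. The URIs and the choice of dcterms:subject below are my own illustrative assumptions, not Pundit’s actual API or output:

```python
# A minimal sketch (not Pundit's actual data model) of an annotation stored
# as an RDF triple linking a web fragment to a DBpedia entity.
from rdflib import Graph, Namespace, URIRef

DCTERMS = Namespace("http://purl.org/dc/terms/")

g = Graph()
g.add((
    URIRef("http://example.org/some-page#paragraph-3"),   # annotated fragment (hypothetical)
    DCTERMS.subject,                                       # "is about" relation chosen by the annotator
    URIRef("http://dbpedia.org/resource/Semantic_Web"),    # the DBpedia entity it links to
))

# Serialize as Turtle so the annotation can be shared or merged with others.
print(g.serialize(format="turtle"))
```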
The French have pulled the plug on Minitel, the videotex service that was introduced in 1982, 30 years ago. I remember seeing my first Minitel terminal in France, where I lived briefly in 1982-83. I wish I could say I understood it at the time for what it was, but what struck me then was that it was an awkward replacement for the phonebook. Anyway, as of June 30th, Minitel is no more and France says farewell to the Minitel.
Minitel is important because it was the first large-scale information service. It turned out not to be as scalable and flexible as the web, but for a while it provided the French with all sorts of text services, from directories to chat. It is famous for the messageries roses (pink messages), the adult chat services that emerged (and helped fund the system).
In Canada, Bell introduced a version of Minitel called Alex (after Alexander Graham Bell) in the late 1980s, first in Quebec and then in Ontario. The service was too expensive and never took off. Thanks to a letter in today’s Globe, I discovered that there was some interesting research and development into videotex services in Canada at the Canadian Research Communications Centre in the late 1970s and 1980s. Telidon was a “second generation” system that had true graphics, unlike Minitel.
Despite all sorts of interest and numerous experiments, videotex was never really successful outside of France and Minitel. It needs a lot of content for people to be willing to pay the price, and the broadcast model of most trials meant that you didn’t have the community generation of content needed. Services like CompuServe that ran on PCs (instead of dedicated terminals) were successful where videotex was not, and ultimately the web wiped out even services like CompuServe.
What is interesting, however, is how much interest and investment there was around the world in such services. The telecommunications industry clearly saw large-scale interactive information services as the future, but they were wedded to centralized models for how to try and evolve such a service. Only the French got the centralized model right by making it cheap, relatively open, and easy. That it lasted 30 years is an indication of how right Minitel was, even if the internet has replaced it.
The Digging Into Data program commissioned CLIR (Council on Library and Information Resources) to study and report on the first round of the programme. The report includes case studies on the 8 initial projects, including one on our Criminal Intent project titled Using Zotero and TAPOR on the Old Bailey Proceedings: Data Mining with Criminal Intent (DMCI). More interesting are some of the reflections on big data and research in the humanities that the authors make:
1. One Culture. As the title hints, one of the conclusions is that in digital research the lines between disciplines and sectors have been blurred to the point where it is more accurate to say there is one culture of e-research. This is obviously a play on C. P. Snow’s Two Cultures: the two cultures of the sciences and the humanities, which have been alienated from each other for a century or two, are now coming back together around big data.
> Rather than working in silos bounded by disciplinary methods, participants in this project have created a single culture of e-research that encompasses what have been called the e-sciences as well as the digital humanities: not a choice between the scientific and humanistic visions of the world, but a coherent amalgam of people and organizations embracing both. (p. 1)
2. Collaborate. A clear message of the report is that to do this sort of e-research people need to learn to collaborate and by that they don’t just mean learning to get along. They mean deliberate collaboration that is managed. I know our team had to consciously develop patterns of collaboration to get things done across 3 countries and many more universities. It also means collaborating across disciplines and this is where the “one culture” of the report is aspirational – something the report both announces and encourages. Without saying so, the report also serves as a warning that we could end up with a different polarization just as the separation of scientific and humanistic culture is healed. We could end up with polarization between those who work on big data (of any sort) using computational techniques and those who work with theory and criticism in the small. We could find humanists and scientists who use statistical and empirical methods in one culture while humanists and scientists who use theory and modelling gather as a different culture. One culture always spawns two and so on.
3. Expand Concepts. The recommendations push the idea that all sorts of people/stakeholders need to expand their ideas about research. We need to expand our ideas about what constitutes research evidence, what constitutes research activity, what constitutes research deliverables and who should be doing research in what configurations. The humanities and other interpretative fields should stop thinking of research as a process that turns the reading of books and articles into the writing of more books and articles. The new scale of data calls for a new scale of concepts and a new scale of organization.
It is interesting how this report follows the creation of the Digging Into Data program. It is a validation of the act of creating the programme and creating it as it was. The funding agencies, led by Brett Bobley, ran a consultation and then gambled on a programme designed to encourage and foreground certain types of research. By and large their design had the effect they wanted. To some extent CLIR reports that research is becoming what Digging encouraged us to think it should be. Digging took seriously Greg Crane’s question, “what can you do with a million books”, but they abstracted it to “what can you do with gigabytes of data?” and created incentives (funding) to get us to come up with compelling examples, which in turn legitimize the program’s hypothesis that this is important.
In other words we should acknowledge and respect the politics of granting. Digging set out to create the conditions where a certain type of research thrived and got attention. The first round of the programme was, for this reason, widely advertised, heavily promoted, and now carefully studied and reported on. All the teams had to participate in a small conference in Washington that got significant press coverage. Digging is an example of how granting councils can be creative and change the research culture.
> The Digging into Data Challenge presents us with a new paradigm: a digital ecology of data, algorithms, metadata, analytical and visualization tools, and new forms of scholarly expression that result from this research. The implications of these projects and their digital milieu for the economics and management of higher education, as well as for the practices of research, teaching, and learning, are profound, not only for researchers engaged in computationally intensive work but also for college and university administrations, scholarly societies, funding agencies, research libraries, academic publishers, and students. (p. 2)
The word “presents” can mean many things here. The new paradigm is both a creation of the programme and a result of changes in the research environment. The very presentation of research is changed by the scale of data. Visualizations replace quotations as the favored way into the data. And, of course, granting councils commission reports that re-present a heady mix of new paradigms and case studies.
A couple of weeks ago I gave a talk at Digital Infrastructure Summit 2012 which was hosted by the Canadian University Council of Chief Information Officers (CUCCIO). This short conference was very different from any other I’ve been at. CUCCIO, by its nature, is a group of people (university CIOs) who are used to doing things. They seemed committed to defining a common research infrastructure for Canadian universities and trying to prototype it. It seemed all the right people were there to start moving in the same direction.
For this talk I prepared a set of questions for auditing whether a university has good support for digital research in the humanities. See Check IT Out!. The idea is that anyone from a researcher to an administrator can use these questions to check out the IT support for humanists.