I love the open, freewheeling conversations commonly found at THATCamps, but I sometimes wish that some sessions were more grounded in specificity, and that participants could get CV-worthy credit for leading them. At the Texas Digital Humanities Consortium’s May 27 mini-conference, we aim to mash up the best of THATCamp and traditional conferences: to provide a forum where a researcher or group of researchers will present their work for 15 minutes and then lead the participants in discussion or experimentation inspired by the presentation for the rest of the hour. We hope that this hybrid approach will give presenters the opportunity to share their work, get credit for it, and receive feedback on it, and give participants the chance to explore issues raised by the session and generate new insights. This approach resembles one of my favorite class formats: begin with a brief lecture to establish the context, then launch into a dynamic discussion to allow for deeper exploration. For example, presenters might discuss a project to create a digital audio archive, then facilitate a discussion about challenges such as annotation and digital preservation. Or a session might focus on a GIS project to map patterns of oppression in a particular region, opening up into a conversation about how to deal with uncertainty in data and include the perspectives of oppressed communities. We’re open to a variety of approaches. All proposals will undergo peer review, which will help ensure the quality of the conference. Please see the CFP at https://conferences.tdl.org/tcdl/index.php/TCDL/index/pages/view/txdhc
The Texas Digital Humanities Consortium is organizing this mini-conference in collaboration with the fine folks at the Texas Conference on Digital Libraries (TCDL); it will be held immediately after TCDL at the Commons Learning Center on the J.J. Pickle Research Campus in Austin, Texas. We intend to keep the mini-conference to about 50 registrants, which should allow for rich conversation and networking. Through the event, we hope to deepen connections among scholars, librarians, cultural heritage professionals, technologists and graduate students.
The deadline for proposals is February 12, 2016 (note the new deadline). Feel free to send any questions to email@example.com, and please help spread the word about the event. We look forward to some terrific proposals.
[cross-posted to TXDHC]
One of the most famous images from the dawn of the nuclear era is back in the news: it is no longer seven minutes to midnight, but five, according to the board of directors of the Bulletin of the Atomic Scientists, who announced that they were moving the hands of their famed "Doomsday Clock" closer to Armageddon. The "Doomsday Clock" first made its appearance on the cover of the Bulletin in June of 1947, a kind of visual shorthand that expressed the anxiety of many nuclear scientists about the arms race that had made the world a more dangerous place through scientific progress.
In the last 60 years, the hands of the timepiece have been moved back and forth a total of eighteen times. At one extreme, the hands of the clock stood at two minutes to midnight in 1953, after the Soviet Union had followed the United States in successfully testing a new level of nuclear weaponry, the hydrogen bomb; at the other, in 1991, the hands slipped below the fatal last quarter, retreating to seventeen minutes to the final hour, due to the end of the Cold War and movement toward disarmament through the Strategic Arms Reduction Treaty.
It's always news when the Bulletin changes the clock's timing, but there was an additional news hook in this 2007 decision: the increasing threat to world survival was pegged as coming not only from nuclear events, but from such phenomena as global warming. As reported in the Chicago Tribune -- "Doomsday Clock to Start New Era" (Jeremy Manier, 1.17.07) --
. . . when the Chicago-based Bulletin of the Atomic Scientists unveils the first change to the Doomsday Clock in four years, the risk of a nuclear holocaust will be just one among many threats that nudge the position of the clock's portentous minute hand. The keepers of the clock have expanded its purview to include the threat of global warming, the genetic engineering of diseases and other "threats to global survival."
It may be a stretch to put nuclear weapons and climate change in the same category, but that's one way the organization is trying to keep its 60-year-old clock relevant at a time when bioterrorism and radical groups can threaten the largest nations.
Indeed, this novel aspect of the nuclear experts reaching beyond the mushroom cloud to anoint climate change as a comparable danger was duly noted and clearly highlighted by most outlets, as in this Canadian Broadcasting Corporation news story ("The Doomsday Clock Advances Two Minutes" 1.17.07):
Add a new crop of countries dazzled by nuclear technology to other global threats such as climate change and environmental degradation and the result, according to the Bulletin of Atomic Scientists, is almost toxic.
"We stand at the brink of a second nuclear age," the board said in a statement.
The move from seven to five minutes from midnight was decided upon after scientists reviewed the current nuclear situation in combination with expected climate change, marking the first time the Doomsday Clock has ever reflected a separate world threat in addition to the bomb.
Even if, as Chicago Sun-Times columnist Neil Steinberg remarked, "The Bulletin of the Atomic Scientists' Doomsday Clock has to be one of the most successful magazine public relations gimmicks of all time, right up there with Time's Person of the Year and the Sports Illustrated swimsuit issue" (1.17.07), the roll-out of the 2007 model was newer, bigger, and better, apocalyptically speaking. Even still-newsworthy icons need a brush-up, it seems, whether design-wise or content-wise, to garner sufficient attention. An added kick was gained by bypassing the traditional site for Doomsday announcements: as noted by the Chicago Tribune, "in an added bid to influence policymakers and draw an international audience, the Bulletin is moving this year's announcement from its customary place in Chicago to a dual event held in London and Washington."
The bilateral press events did indeed seem to generate substantial coverage in the English-language world, but even with all the "doomsday clock enters a new era" emphases, it seemed to me as if the stories would have fit relatively easily within a bygone world. Yes, the emphasis on climate science was new, but the key educational lesson seemed to fit comfortably within the venerable scientific organizational chart that places nuclear physics at the top, with what physicists have to say counting for more than the words of scientists from other disciplines -- there was a literal sense in which physicists were speaking for their other colleagues, graciously deigning to share their authority and the stage (metaphorically at least).
I found most fascinating the pictures of theoretical physicist Stephen Hawking from the London event, where photographers sought to couple Hawking the icon with the Bulletin's icon. Climate science may have been the newsworthy angle, but physics as arbiter was definitely a controlling visual metaphor. The photo at the left is one version, with the clock floating above somewhat like a heavenly image of doom; at the right is a different take, which very nearly manages to juxtapose the two, tightly framing the machine-bound thinker and the message that we have but five minutes of future to go before time expires and our brief history along with it. The third photograph, which accompanied an online BBC News article ("Climate Resets the 'Doomsday Clock' " by Molly Bentley, 1.17.07) manages to get the shot that everyone must have been after, whether conscious of it or not: the physicist's face and the timepiece's face, melded together in a doubly powerful dose of symbolism, his head held at nearly the same angle of incidence (so to speak) as the minute hand as it closes the gap counting down to the zero hour, literally overshadowing the scientific mind in the foreground.
Rather like nuclear physicist announcements of decades past, men appeared to dominate the photographic spotlight, whether through pictures of Hawking from London or by pulling old file photos featuring a male hand on the clock (for example, to the left; from Alaska Report, using a Reuters file image). In Washington, Bulletin Executive Editor and political scientist Kennette Benedict was also part of the stage presence, along with Ambassador Thomas Pickering and physicist Lawrence Krauss. These pictures tended to feature her rather awkwardly, as with this one that shows her off in the distance, fussing with the unveiling of the new time, while the men look on as she finishes with the stagecraft. It looks somewhat like every tedious office meeting with middle management that you've ever had to sit through as they fuss with the flow charts. It just doesn't have the same authoritative impact as the others, diffusing the visual warning that the end of the world is nigh.
But the black-and-white analog 1950s feel to this news event also stems from the endless reiteration of the "doomsday" theme. Now the idea of doomsday has a long lineage -- one of my favorite examinations of the cultural resonance of this theme is Daniel Wojcik's The End of the World as We Know It: Faith, Fatalism, and Apocalypse in America (which includes discussions of secular apocalyptic themes in the nuclear era as well), and of course the idea of doomsday stretches back millennia -- but in the years after World War II, the growing awareness of the unprecedented destructive power created through atomic science -- especially with the H-bomb -- gave the doomsday scenario a new grasp on life (so to speak). As Wojcik argues:
The concept of a meaningless apocalypse brought about by human or natural causes is a relatively recent phenomenon, differing dramatically from religious apocalyptic cosmologies. Instead of faith in a redemptive new realm to be established after the present world is annihilated, secular doomsday visions are usually characterized by a sense of pessimism, absurdity, and nihilism. (p. 97)
The Doomsday Clock was an apt image for scientists to reach for in a Doomsday world circa 1947 / 1953 in which scientists saw it as their responsibility to blast the populace (and the policy-makers) out of what they saw as a complacent response of willful ignorance in the face of daily emergency; to the extent that scientists still address the public in such stark and urgent terms when informing them of scientific opinion on matters such as nuclear proliferation or global warming, then the Doomsday Clock certainly remains a relevant symbol. But if the Doomsday Clock is an accurate visual shorthand for the longer, more complex scientific arguments that undergird it, this does not necessarily mean it is (or was?) an effective communication device, in terms, at least, of engaging the public in a meaningful discussion of risk assessment, scientific expertise, political realities, and democratic decision-making.
A few years back I opened a discussion with the students in my history of modern science course about the continuing relevance of nuclear issues as a political matter by taking them through the timeline of the Doomsday Clock, asking them to draw a picture of their own clock, and then having them write about what they thought the time should be and why. I was surprised to learn that many students resented what they saw as the manipulative nature of physicists choosing the last 15 minutes before midnight as their starting point. Many of them argued for placing the hands at 9:00 or 10:00 or 11:00 -- not because they were insisting that nuclear weapons were of little importance, but because they believed that their own starting points placed more faith in the power of human beings to navigate difficult straits. It might still be night, but we had been pushing back against the darkness and we were not at the last gasps before a total loss of control, of options, of hope. They were looking to be empowered, not diminished, as a motivation toward action.
In the eyes of the Bulletin scientists, no doubt my students would seem naive in rejecting the "minutes to midnight" framework. The Bulletin has an incredible amount of international political experience at their fingertips and intellectual mindpower at their disposal -- as the Bulletin's press release notes, the decision of the "BAS Board of Directors was made in consultation with the Bulletin’s Board of Sponsors, which includes 18 Nobel Laureates." It is true that there were no Nobel Laureates on my class roll that year. But I believe that these students were articulating an important reality, one that places the thinking of their generation at odds with the cold war mechanics out of which the "Doomsday Clock" is constructed, and where the "two cultures" norm holds sway [the expression itself a cold war era contribution by C.P. Snow]: brilliant scientific minds needing to get the attention of inattentive or lesser minds (such as those with a shaky grasp of the second law of thermodynamics as Snow suggested) by prophesying immediate doom. In a recent article, the Bulletin of the Atomic Scientists called their symbol "The People's Clock." After listening to my students, I don't think I would agree.
In his book Mad, Bad and Dangerous? The Scientist and the Cinema, author Christopher Frayling contends that:
Up until quite recently, real-life senior scientists have tended to present themselves like bewigged judges in court -- remote, out of touch, unconsultative, much given to pontificating and immune from criticism. And senior scientists have wondered why the public does not follow them every step of the way! Now there is much more consultation and much more emphasis on communications skills, but these tend to be confined to set-piece platforms or media debates in which the rhetoric of horror films -- on both sides -- takes over from serious discussion. 'Seeing into the mind of God' or 'destroying the planet' or 'my statistics are better than your statistics' or dismissive comments about lay people in the name of public understanding of science, tend to be the resulting headlines. (p. 226)
It is easier to re-animate old patterns of discourse than to try, in a later phrase of Frayling's, to "break the flow" and find new forms of engagement. But if the public is truly to be a partner in a scientific conversation about pressing issues, then new strategies of discursive detente need to be deployed. In fact it may be time -- it may be past time -- to do so.
For more: The Jan/Feb 2007 issue of the Bulletin of the Atomic Scientists has a very nice two-page layout on the history of the clock (even if I take exception to the title of the article), including a reminiscence from the artist, Martyl, who first created the image. There's an interesting historical artifact from Time Magazine online: a 1964 article entitled "Turning Back the Clock," which states that since "now there is less concern about Armageddon and less shock value to the power of the atom, the clock is ticking mostly for the Bulletin. Its funds low, the magazine is once more passing the hat." And speaking of whether or not the clock is outdated, Dood Abides at Unconfirmed Sources plays with the file photo of the Doomsday Clock to present a new, shiny digital version for the 21st century :-) For more of Stephen Hawking's dire pronouncements about the fate of the human race, see "Prophet of Doomsday: Stephen Hawking, Eco-Warrior" by Geoffrey Lean in the Independent, 1.27.07. For an interesting undergraduate conversation by students from different majors about the "two cultures" idea, see this panel discussion, "The Two Cultures: Students Speak their Minds," from the University of Colorado.
Images: The very first image is from the homepage of the Bulletin, at http://www.thebulletin.org/; the original 1947 cover is from the Los Alamos National Laboratory Research Library, online at http://library.lanl.gov/libinfo/news/images/BulletinAS-cover.jpg. The first Hawking image is from the Telegraph ("Hawking: Doomsday Clock Closer to Midnight" 1.18.07) at http://www.telegraph.co.uk/news/main.jhtml?xml=/news/2007/01/17/nclock117.xml; the second Hawking image is from the CBC article at http://www.cbc.ca/news/background/nuclearweapons/doomsday-clock.html; and the third image from the BBC is at http://news.bbc.co.uk/1/hi/sci/tech/6270871.stm. The file photo shown on the Alaska Report is at http://www.alaskareport.com/reu77351.htm while the trio photo from the DC press conference was carried on an msnbc.com article "Doomsday Clock Moves Closer to Midnight" http://www.msnbc.msn.com/id/16670369/.
GE’s new Middle East Aviation Technology Center opened in November 2015 in the Dubai Airport Free Zone area. There are ten Ideum Drafting Table 46s and one 100-inch Pano table featured in the high-tech facility, which shows off GE’s big data platform, Predix. The center provides a collaborative environment for GE engineers and customers to explore the aviation ecosystem. Our unique multitouch tables support the collaborative direction of the center, which also serves as a regional location for customer and product support.
We worked closely with 2.0 Concepts, our Middle East and UK partner, to coordinate the placement of our hardware at this state-of-the-art site.
Ideum is adding an additional 5,000 square feet of production space. This new addition will allow us to expand the production of our popular 55″ multitouch tables and touch walls. Our interactive displays are the only ones available in 4K UHD with 3M projected capacitive touch technology. The new space will come online in February and coincide with the release of a new 65″ 3M display that will be available through our entire touch table product line. The “official” announcement of the new 65″ display will happen toward the end of this month!
The new production space will not only allow us to increase the production of our popular 3M interactive touch displays, but it will also allow us to expand our dedicated QA testing area and improve support. Back in the summer of 2014, we added the Prototyping and Usability Lab into the mix. The products that were designed there are now fueling this next phase of expansion.
This photo from last week shows the southwest corner of the new production studio space completely gutted. The new floor and ceiling are going in soon.
The space in November as the demolition was underway.
We will be hosting an open house in February once the space comes online. We will post more details as the remodeling continues.
Approaches to digital exhibits in museums and public spaces have dramatically evolved as technology and techniques have improved. Just a decade ago, a 19- or 21-inch kiosk presenting text, images, and video was the standard. Today, immersive spaces, large displays, multitouch tables, motion recognition exhibits, and other impactful technologies are changing the ways in which visitors interact in public spaces. Along with the technological improvements (and the associated reduction in costs), design approaches have changed, making digital exhibits much more social, interactive, and engaging.
Having developed digital exhibits for more than 20 years now, first at the Exploratorium and for the last 16 years at Ideum, it has been exciting to be both a witness to and a participant in this dramatic change. I’ve never been more excited about the work that is being done and possibilities that exist for development in the near future.
Here at Ideum, to better understand where digital exhibits are headed, we’ve classified the types of experiences we are involved in developing: Media Rich Interactives & Storytelling, Games, Creative Applications, and Mixed Reality Experiences. This isn’t a complete list and we are not including Web, mobile, or wayfinding applications here (which can share some of these qualities and with which we also have some experience). This article focuses on experiential types of digital exhibits and begins to define different visitor experiences and outcomes. We’ve included some of the applications that we’ve developed for the Sprout by HP platform too, as they reflect the types of interactive experiences we should see more of in gallery spaces in the near future.
Media Rich Interactives & Storytelling
The most common types of interactive digital exhibits in museums present images, text, and video. Many of the projects we began developing in the early 2000s fall into this category of digital exhibits. These types of exhibits can have media organized by theme or category and can include connection to digital museum collections. Media elements can be presented on timelines, or maps, or organized in other ways to provide additional context. Media-rich interactives can be vehicles for storytelling, which has been a mainstay of digital exhibits in museums for years.
In the last year, we’ve developed a few applications that would fall into this category. Here are a few examples.
Frank Lloyd Wright Homes with Crystal Bridges Museum of American Art – Visitors explore the residential architecture of Frank Lloyd Wright. Large photographs and short descriptions are presented on a custom-built 34” ultrawide, ultra-high-resolution touch monitor.
Faces to Go With Names with The Sullivan Brothers Iowa Veterans Museum – A moving digital memorial to the 853 people from Iowa who died in the Vietnam War. Photographs and biographical information can be browsed or searched in a variety of ways. The exhibit is displayed on a 65” 4K UHD vertically-hung touch monitor.
The Great Inka Road with the Smithsonian National Museum of the American Indian – Perhaps the most elaborate media-rich exhibit we’ve ever worked on. Multiple visitors interact with a 3D reconstruction of the Inka capital of Cusco circa 1531. Points of interest bring up media elements and two small digital stations on the table allow visitors to “tour” the 3D city via a bird’s eye perspective. The exhibit runs on a massive 84” Colossus multitouch table.
Games

Large-format multitouch tables and touch walls can be impactful when used in gaming and can also, depending on the design, accommodate multiple players. Compared to a decade ago, the programming and design expertise is more widely available and software tools such as Unity3D are well-developed and easier to use than their predecessors.
Foosball with Coca-Cola – A 3D digital version of the classic tabletop game. We designed the game play to closely mirror the physical game using a multitouch table. The players keep track of the score, not the computer. The handles work much like the physical ones on real foosball tables. These attributes make the game more open-ended and accessible. It’s not educational, but rather a branding experience. The exhibit runs on 46” Platform touch table.
Be a Bug with the Albuquerque BioPark – A single-user Kinect and touch-based exhibit. Visitors use their whole body to control 3D bugs that pollinate flowers, eat fruit, and hunt other bugs. The exhibit uses a vertically hung 65″ Presenter touch wall; hanging the monitor in this way increases the size of the bugs as they are presented in the scene. The unique orientation of the display and its ultra high-resolution provides visitors with a scene that is somewhat different than a home gaming setup.
Creative Applications

With these types of exhibits, the user must take parts and pieces and actively make something. These exhibits often involve some capacity to share the finished work, and they can combine additional content or educational themes that are explored during the process of creation. These exhibits usually have longer dwell times. They can also be more difficult to design since they are task-oriented: visitors not only need to navigate, but also need to use the creation tools.
We all can learn by doing, and having active and participatory experiences can help museums meet their educational goals. Most of the examples we’ve worked on this year have been part of art exhibitions, where visitors have deepened their understanding and appreciation for the artwork, the artists, and their techniques through active participation. However, one standout example involves collaboratively creating a space station!
Warhol and Wyeth Photobooth with Crystal Bridges Museum of American Art – Visitors take their own picture and create a stylized portrait. Visitors learn about the different styles of the two artists and portraiture. The application uses a large touch monitor and an Intel RealSense camera to automatically “green screen” visitors, removing the background from the portraits they take. Visitors can share their creations via email and Tumblr.
Textile Maker with The Institute of American Indian Arts (IAIA) - Visitors choose fabric, colors, stamps, and patterns and make their own colorful textiles in the style of Lloyd Kiva New. The textiles are projected at a large scale and can also be shared via Tumblr.
Landscapes Carry Meaning with Crystal Bridges Museum of American Art – Different background scenes are presented and visitors place and scale various objects and elements to construct a landscape painting. Visitors assign meaning to the objects placed, developing a description to match their painting. Visitors can share their creations via email and Tumblr.
Design It! Build a Space Station with Smithsonian National Air and Space Museum – In this re-envisioning of an older application, visitors work at six stations on the sides of a large-scale multitouch table. Visitors design modules for a space station while staying within budget; each of the individually-created modules is added to a space station in the center of the table. The visitors can email themselves details about the space station they created collaboratively with the other people at the table.
Mixed Reality Experiences

We have also been working a lot in the past year on mixed-reality exhibits and activities. These exhibits include some kind of tangible or real-world object(s) in combination with a digital experience. Mixed-reality applications are compelling by nature as they juxtapose common objects with new digital interfaces. They can extend learning by using real objects as catalysts for new information or activities.
While the examples shown here are on the Sprout by HP platform or show our touch tables with our own proprietary fiducial recognition system, we are already working on the next generation of these types of applications for public spaces.
Bills & Coins for the Sprout by HP – Bills & Coins uses the built-in camera system of the Sprout by HP and an object recognition system that we developed to automatically recognize currency from around the world. Visitors learn about currency rates, the history, and the geography of the countries that produce the currency. In addition, the application has virtual currency available allowing users to explore currency whether they have the actual money in hand or not.
Origami Apprentice for the Sprout by HP – Another application that takes advantage of the unique Sprout platform, Origami Apprentice uses both the main screen and the projected desktop screen to show users step by step how to fold and create origami.
Office of the Future - In this experimental project, we developed a method for recognizing fiducials using projected capacitive touch monitors. In the video, we show how these markers could help blend real objects with digital ones. While this was developed in 2014, we pursued the technology further in 2015, and we will have our first commercial release using it later this month!
It is helpful to think about the types of visitor experiences that digital technology can enable, but these are not hard categories, and projects can fall into more than one of the classifications presented. For example, mixed-reality applications can blend in storytelling or even gaming. In addition, these categories can change dramatically as new techniques and technologies emerge, and rapid change is coming.
The field of large-scale digital exhibit development is beginning to undergo a transformation as those who manage retail and other commercial public spaces are seeing the value in creating multiuser digital experiences. Having more design firms enter this space will help move design methods along. Although the pace will be slower, this transformation will happen much in the same way that Web and mobile design rapidly improved as many more firms got involved and as the audience for these experiences grew.
Later in 2016, we will be releasing some first-of-their-kind visitor experiences, some of them designed for high-profile commercial spaces. We are working on new mixed-reality installations and we are also looking at blending fiducials and tangible objects, motion detection, sensor input, RFID, NFC, and other technologies with our large multitouch tables and touch walls. Think of these large interactive displays as the hub of a larger experience and you can imagine what we have in mind. We are excited at the prospect of sharing these new experiences with you later this year.
This is what we know: On November 24, 2015, the Wu-Tang Clan sold its latest album, Once Upon a Time in Shaolin, through an online auction house. As one of the most innovative rap groups, the Wu-Tang Clan had used concepts for their recordings before, but the latest album would be their highest concept: it would exist as only one copy—as an LP, that physical, authentic format for music—encased in an artisanally crafted box. This album would have only one owner, and thus, perhaps, only one listener. By legal agreement, the owner would not be allowed to distribute it commercially until 88 years from now.
Once—note the singularity at the beginning of the album’s title—was purchased for $2 million by Martin Shkreli, a young man who was an unsuccessful hedge fund manager and then an unscrupulous drug company executive. This career arc was more than enough to make him filthy rich by age 30.
Then, in one of 2015’s greatest moments of schadenfreude, especially for those who care about the widespread availability of quality healthcare and hip hop, Shkreli was arrested by the FBI for fraud. Alas, the FBI left Once Upon a Time in Shaolin in Shkreli’s New York apartment.
Presumably, the album continues to sit there, in the shadows, unplayed. It may very well gather dust for some time.
This has made many people unhappy, and some have hatched schemes to retrieve Once, ideally using the martial arts the Shaolin monks are known for. But our obsession with possessing the album has prevented us from contemplating the nature of the album—its existence—which is what the Buddhists of Shaolin would, after all, prefer us to do.
RZA, the leader of the Wu-Tang Clan, had tried to forewarn us. As he told Forbes, “We’re about to put out a piece of art like nobody else has done in the history of music…This is like someone having the scepter of an Egyptian king.”
Many have sought ways that the public might listen to Once, but few have taken RZA at his word. What if Once Upon a Time in Shaolin is meant primarily as art, as a precious artifact that only one person, like a king, can hold? And if we consider this question, do we really need to listen to the album to hear what it’s saying?
* * *
In 1995, the Chinese artist Ai Weiwei took an ancient, priceless Han Dynasty vase and dropped it onto a brick floor. It instantly shattered. He took a series of high-speed photographs of the vase drop, which he assembled into a triptych; in the middle photograph the vase seems like it’s in a levitating, suspended state. It exists, but it is milliseconds from not existing. It is forever there, whole, and yet we know it is forever in pieces.
He shouldn’t have destroyed that singular vase, you may be thinking. You must think more deeply, and enter the Shaolin temple of your mind.
* * *
In the old mill town of North Adams, Massachusetts, a cluster of nineteenth-century factory buildings has been converted into the largest museum of contemporary art in the United States: Mass MoCA. One entire building, from top to bottom, is dedicated to the work of Sol LeWitt.
Sol LeWitt is an unusual artist in that he rarely painted, drew, or sculpted the art you see by him. Instead, he wrote out instructions for artworks and left it to "constructors" (often art students, museum curators, or others) to do the actual work of fabrication. LeWitt liked to be a recipe writer, not a chef.
“Wall Drawing 1180: Within a circle draw 10,000 straight black lines and 10,000 black not straight lines. All lines are randomly spaced and equally distributed.”
Somehow, incredibly, this ends up looking like a massive picture from the Hubble Telescope: an infinite field of stars emerges after weeks of drawing thousands of squiggly and straight lines with a pencil.
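LeWitt's instructions read almost like pseudocode, and the recipe-versus-chef idea can be made literal. Here is a minimal sketch in Python (my own illustration, not anything LeWitt or Mass MoCA produced) that "constructs" the geometry of Wall Drawing 1180 as data: straight lines are pairs of endpoints sampled inside a circle, and not-straight lines bend through a jittered midpoint.

```python
import math
import random

def point_in_circle(radius=1.0):
    """Uniformly sample a point inside a circle centered at the origin."""
    r = radius * math.sqrt(random.random())
    theta = random.uniform(0, 2 * math.pi)
    return (r * math.cos(theta), r * math.sin(theta))

def wall_drawing_1180(n=10_000):
    """Follow the recipe: n straight lines and n not-straight lines,
    randomly placed within a circle. A straight line is two endpoints;
    a not-straight line bends through a jittered midpoint."""
    straight = [(point_in_circle(), point_in_circle()) for _ in range(n)]
    not_straight = []
    for _ in range(n):
        a, b = point_in_circle(), point_in_circle()
        mid = ((a[0] + b[0]) / 2 + random.uniform(-0.2, 0.2),
               (a[1] + b[1]) / 2 + random.uniform(-0.2, 0.2))
        not_straight.append((a, mid, b))
    return straight, not_straight

straight, bent = wall_drawing_1180(n=1000)
```

Every run, like every team of constructors, produces a different wall; the recipe, not any particular rendering, is the fixed thing.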
Sixty-five art students and artists, none of them Sol LeWitt, made the Sol LeWitt exhibit, and it is one of the most beautiful things you'll ever see. The patterns, the colors, and the way that LeWitt's often deceptively simple recipes result in a sumptuous banquet for the eyes are remarkable.
But the exhibit will only last for 25 years—eight of which have already ticked by—after which the museum will paint over all of the art. Touring the exhibit, you can’t help but think about this endtime: All of this beauty, and yet on some Monday morning in the not-really-that-distant future some guy with a 5-gallon bucket of white paint from Home Depot and a wide roller brush on the end of a long wood handle will cover those walls forever. Will he sigh before making the first stroke?
Until that Monday morning in 2033, the Sol LeWitt exhibit exists. You have 17 years remaining, but time moves more quickly than we like, doesn’t it? I have told you to see it, but will you make the trip to North Adams? Right now, for those who have not seen it, it’s Ai Weiwei’s Han vase in mid-drop. It’s just that the gravity is lighter, the fall slower. But the third photograph, the smashed pieces, is coming.
Do you fear the loss of that magical field of stars and scores of other wall-sized artworks? Or have you closed your eyes, meditated, and concluded: Even if I never get to North Adams, LeWitt’s recipes will still exist, and they are the true art.
* * *
In 2008, as Mass MoCA was constructing the Sol LeWitt exhibit, they also hosted an exhibit of the art of Spencer Finch. Finch was fascinated by Emily Dickinson, and wished to recreate the moments in which she looked out of her window, thinking and writing poetry. Could these ephemeral views be recaptured, made physical for us so many years later?
“Sunlight in an Empty Room (Passing Cloud for Emily Dickinson, Amherst, MA, August 28, 2004)” tried to do so. Finch used lighting and light filters to make a cloud of just the right wavelengths that Dickinson would have seen outside of her bedroom on a particular day.
You cannot capture a moment, you mutter softly, waving your hand, nor Emily Dickinson’s thoughts.
* * *
Open your favorite streaming music app, and search for the blockbuster 2013 song “Get Lucky.”
Make a playlist that includes the original Daft Punk version, which should come up as the first hit, but also add to the list three other covers of the song by artists you have never heard of, which you will find by scrolling down the search results page.
These versions exist because of something called a “compulsory license,” which means that by paying a defined fee to an agency, you are allowed to record a cover song without asking for, or receiving, permission from the artist who wrote it. The song becomes a recipe and you become the constructor.
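To make the recipe-and-constructor economics concrete, here is a back-of-the-envelope sketch. The rates are the US statutory mechanical rates in effect around 2015 (9.1 cents per copy for songs of five minutes or less, otherwise 1.75 cents per minute or fraction thereof, whichever is larger); the copy counts are hypothetical, not drawn from any actual "Get Lucky" cover. The fee works out to cents per copy, which is why obscure cover versions are economically possible.

```python
import math

def mechanical_royalty(copies, song_minutes,
                       per_copy=0.091, per_minute=0.0175):
    """Estimate the compulsory ('mechanical') license fee for a cover.
    US statutory rate, c. 2015: 9.1 cents per copy for songs of five
    minutes or less, otherwise 1.75 cents per minute (or fraction
    thereof) -- whichever amount is larger."""
    rate = max(per_copy, per_minute * math.ceil(song_minutes))
    return round(copies * rate, 2)

# A hypothetical 10,000-copy release of a 4-minute cover:
fee_short = mechanical_royalty(10_000, 4)    # $910.00
# Covering the six-minute album cut costs more per copy:
fee_long = mechanical_royalty(10_000, 6.1)   # $1,225.00
```

Pay the fee, file the notice, and the song is yours to record: no permission required, no negotiation with the songwriter.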
Now visit a friend. Play the “Get Lucky” playlist on shuffle mode. When all four songs have been played, ask your friend to identify the original version. The guitar and bass and singing will sound surprisingly similar in each version. Your friend will probably ask, increasingly frantically: “Which is the one true song?”
Do not answer. Thank your friend, bow, and leave.
* * *
“Get Lucky” was co-written by Nile Rodgers, the mastermind behind some of the greatest pop music of the last 40 years, starting with Chic, the disco band that gave us infectious dance hits like “Good Times.” Shortly after “Good Times” was released as a single, the enterprising music producer Sylvia Robinson brought a funk band into a recording studio and had them copy Chic’s bassist Bernard Edwards’ memorable bass line from that song. She also sampled its string section. Adding some rappers no one had ever heard of before, she created “Rapper’s Delight,” which seemed laughable to those who really knew the inventive, emerging hip hop scene, but which rather effectively set rap music on a course for mainstream (and white) popularity.
Rodgers initially hated “Rapper’s Delight,” believing it was a wholesale copy of “Good Times,” and he and Edwards sued for copyright violation. Later, after he won and was listed as a co-writer of the song, he declared himself proud of “Rapper’s Delight.” He realized it was a brilliant theft that changed pop music forever, and yet didn’t diminish Chic’s original work.
“Rapper’s Delight” was far from the only hip hop song to borrow; in fact, the reuse of older recordings was standard within the new genre, and part of its enormous creativity. The technique reached its apogee in arguably the three seminal rap albums of the late 1980s: Public Enemy’s It Takes a Nation of Millions to Hold Us Back, De La Soul’s 3 Feet High and Rising, and Beastie Boys’ Paul’s Boutique. Each of these albums had over a hundred samples, mixing and matching from different genres to make sounds that were totally new.
They were large, you nod, they contained multitudes.
* * *
In 1992, the science fiction author William Gibson, who had coined the word “cyberspace,” released a new work entitled Agrippa (A Book of the Dead). The text was issued, most famously, in a deluxe edition on a 3.5” floppy disk encased in an artisanally crafted box. The disk would encrypt itself upon a single reading, so you only had one shot to read the text as it scrolled across your screen.
This Agrippa cost $2000, and only a very small number were made. Gibson publicly reveled in the work’s combination of the ephemeral and the valuable. He loved that the book, after viewing, would become like a television tuned to a dead channel.
Almost immediately, however, the text of Agrippa was surreptitiously released on an underground electronic bulletin board called MindVox. Anyone can now read it online, and view the deluxe packaging as well.
What is the nature of art, you consider, without its packaging? What is its value?
* * *
The British artist Damien Hirst is probably best known for putting a dead shark in a large tank of formaldehyde and giving it the existential title “The Physical Impossibility of Death in the Mind of Someone Living.” In 2007, he asked the jewelers who fabricate items for the British monarchy—scepters for the king—to make a human skull out of diamonds and platinum, based on a real skull he bought. The original skull’s teeth were set into the final product. Hirst called this artwork “For the Love of God.” Many critics called it “tacky.”
But “For the Love of God” was as much an exercise in finance as in art, a commentary on a contemporary art scene where prices for works regularly head into eight or even nine figures at auction. The fabrication of the skull apparently cost £14 million, and Hirst tried to sell the bejeweled skull to bidders for £50 million. Although there were rumors of a sale, ultimately there were no takers. A mysterious consortium then evidently bought the skull, but for less than £50 million, perhaps much less, and oddly, Hirst seemed to be one of the investors. Some analysts believe that Hirst actually lost money on the deal.
Once Upon a Time in Shaolin was also rumored to be for sale for a much higher number, perhaps as much as $5 million, but Shkreli ultimately bought it for $2 million, which is far less than the Wu-Tang Clan would make from a regular album release.
* * *
What is Once Upon a Time in Shaolin really worth? Is its scarcity its worth, and its worth its true art and value?
Once Upon a Time in Shaolin may not be as scarce as we imagine. It surely exists beyond the sole copy in Martin Shkreli’s apartment. It exists in the sense that members of Wu-Tang created it and still have its music in their heads and could likely recreate it if they wanted. Perhaps RZA is humming some of the songs in his shower right now. It exists as a recipe.
But it may also exist in actuality, albeit in pieces, like the wisps of a cloud. The master recordings may have been destroyed, but the way that digital recording works means that elements of Once existed more than once on magnetic media and probably, somewhere, continue to exist regardless of what the Wu-Tang Clan has done with the completed master. Parts of the album can probably be dug up, like the scepter of an Egyptian king, or the disappearing poetry on a phosphor screen.
If samples were used, they exist on other recordings; if a drum machine was used, those beats exist, identically, on many other machines. Any computers involved surely have files that have not been truly erased, and that could be dug up by digital archaeologists. There may be assembly to be done, and perhaps the final product would be different from the “original.” Or would it?
And perhaps too many traces of the full Once Upon a Time in Shaolin exist for it not to leak, just as Agrippa did.
Of course, then it will just be another stream of bits among the countless streams in our ephemeral era, severed from its unique packaging. It will take its place on millions of playlists, its songs sitting alongside tens of millions of other songs.
We will have gained something from Once’s liberation, but then we will have lost something as well.
* * *
The abstract artist Ellsworth Kelly, who recently died, was once asked about the nature of art. “I think what we all want from art is a sense of fixity, a sense of opposing the chaos of daily living,” he said, with more than a bit of Shaolin wisdom. “This is an illusion, of course.”
In the 2015 installment of the Digital Campus Year in Review podcast, regulars Dan Cohen, Amanda French, Tom Scheinfeldt, and Stephen Robertson look back at 2015 and predict the big news of 2016. Cheers went out to the NEH/Mellon Humanities Open Book Program, Congress (c.1965), the retirement of James Billington as Librarian of Congress, and the US Court of Appeals decision in favor of Google Books. Eliciting jeers were the ad-blocker controversy, the behavior of ProQuest (with Amanda dissenting), and the news that Jennifer Howard has left the higher education beat.
Much of what the group predicted for 2015 came to pass, to some extent: universities were hacked; SHARE developed; the push to learn to code continued; and ProQuest and Gale moved to provide data mining access to their collections (at considerable additional cost to libraries). And, with the FAA moving to require that drones be registered, Mills’s prediction from 2013 that an Amazon drone will be shot down over Texas looks ever more likely. If you are impressed by those predictions, then in 2016 you should expect the Wu-Tang Clan album to leak, Virtual Reality MOOCs to be launched, a digital humanist to win a MacArthur Fellowship, hypothes.is not to take off (or to enjoy the same success as DPLA), and emojis to replace text as our primary form of communication.
Running time: 59:23
Download the .mp3
In my second post in this series I took on my colleague Steve Pearlstein‘s argument that “universities” should engage in less research, more teaching. In this final post in the series, I want to take up his argument about general education.
Cheaper, better general education. The reform of general education is something I’ve had a lot to say about in this blog over the years, for example: 2006; 2008; and 2008; and again in 2008; and 2010, just to highlight a few of my more agitated posts. So, I agree with Pearlstein that it’s time to take an axe to general education requirements at many universities (not all, just many, and especially mine). But where I have a problem with his argument is when he says the following:
“A university concerned about cost and quality would restructure general education around a limited number of courses designed specifically for that purpose — classes that tackle big, interesting questions from a variety of disciplines. Harvard, with its Humanities 10 seminars, and the University of Maryland, with its I-Series, have recently taken steps in that direction. But this approach will achieve significant savings only if the courses are designed to use new technology that allows large numbers of students to take them at the same time.”
This statement betrays a belief in the efficacy of teaching complex knowledge to large numbers of students at the same time and in the value of efficiency through technology. For a century now, ever since what was once known as the “Harvard system” (large lecture/small recitation) began to invade college campuses, university general education curricula have been built on the delivery of content to masses of lower level undergraduate students (in the classic Course X 101 lecture hall). The application of technology to this delivery system is just a different way to do the same thing — sever the connection between teacher and learner.
A teacher on a screen or as the hidden hand behind an algorithm is no more connected to a learner than is the “sage on the stage” in a lecture hall seating 100, 500, or 800. And I challenge you to find a study run by a cognitive scientist (as opposed to an educational or disciplinary researcher) that demonstrates that the learning outcomes from such disconnected learning exceed those one obtains in a smaller classroom where real connections between teacher and learner are the norm and collaborative learning is the standard. Such studies may exist. And if they do, I’d love to read them.
The real problem is one that Pearlstein doesn’t acknowledge: in today’s challenging fiscal environment in public higher education, fraught with legislative disinvestment, spiraling discount rates, and other financial pressures (especially growing amounts of deferred maintenance), general education is all about the money. At today’s enrollment-driven public college or university, what really matters is butts in seats. If you can’t fill the seats, there is no money. That’s true at the department level, but also at the institutional level.
In fact, Pearlstein’s suggestion is in line with the tried and true approach to this budget model, namely, let’s find a way to let “large numbers of students to take [their general education courses] at the same time.”
Why? Because if we don’t, our budget model will break. Plain and simple.
Thus, I’m not impressed by Pearlstein’s notion of creating something new and cost efficient that would be somehow different. I don’t want cost efficient general education. I want quality general education where students actually learn a subject — something quite different from “great talks by one or more professors and outside experts [combined] with video clips, animation, quizzes, games and interactive exercises — then supplementing that online material with weekly in-person sessions for discussions, problem solving or other forms of ‘active learning.’”
Who, by the way, will hold those “in-person” sessions if 800 students are taking the class? And more to the point, who will staff the “‘labs’ open day and night that use tutors and interactive software to provide individualized instruction in math and writing until the desired competency is achieved”?
Oh, wait. He must mean graduate students…
And so we are back to the economics of the thing. You can’t have “in-person sessions” for large numbers of students and late night labs for large numbers of students unless you are paying graduate students near-starvation wages. It just doesn’t work. Sorry.
A better solution is to rethink the very notion of how we deliver general education altogether. As Matt Reed wrote in his response to Pearlstein’s argument in Inside Higher Ed:
Cheaper, better general education? We have an entire sector for that, too. Research universities are called “research universities” for a reason. If you want a place that values teaching, community colleges are everywhere. For that matter, so are the former teachers’ colleges that form the backbone of most four-year public systems. If you don’t like the economics of the research university sector — and there are good reasons not to — you have alternatives.
The Ernst & Young study of Australian higher education speaks to this exact issue and I have to say, I’m sympathetic to their argument that we need to rethink public higher education as a sector, not just university by university (our default).
What would that look like in Virginia where I work?
We have two large, well-endowed, and well-funded flagship universities: the University of Virginia and Virginia Tech. We should just admit that those two universities are, and will continue to be, the big kids on the block, offering a broad range of graduate programs and research across their campuses. The other three doctoral universities in our “system” (Virginia isn’t really a system like Wisconsin or Indiana or Texas) should become, in the words of the E&Y report, “niche dominators.”
George Mason, where I work, might dominate the niche(s) most closely connected to Washington, D.C. — policy, security, human rights, etc. Virginia Commonwealth University already dominates the niches of health care and the arts. Old Dominion University might end up dominating niches related to defense (given the Norfolk naval station close by), maritime and/or ecological research, or whatever makes sense for them. To get to these dominating positions in our niches, the three institutions in this sector would then also engage in cost shifting by radically downsizing, or yes, eliminating, their investment in graduate programs in any discipline outside their niches, and pour that money into undergraduate education.
And were I the king of Virginia, I would also shift a significant amount of the resources currently devoted to undergraduate general education — especially every penny spent on a course seating more than 100 students — to the community college system. As Matt Reed points out, community colleges, by and large, do an excellent job in those first two years of the college curriculum — so why not throw bad money after good and give it to them?
Don’t believe me when I say they do a good job? A student who enrolls at George Mason University after completing an AA degree from a community college is more likely to graduate from our university than one who enrolls with us as a freshman. So, who’s doing a better job when it comes to general education?
Of course, everything I’ve written in this series flies in the face of both generally accepted practice in American higher education, and our common desire to be more like University X or Y who I likely see as being more of a “real university” than the one where I work.
I guess it’s probably a good thing I won’t ever be king of Virginia.
In my previous post in this series, a response to a column my colleague Steve Pearlstein wrote in the Washington Post over the weekend, I discussed some difficult choices that public universities will need to make in the future as enrollments change, legislative investment declines, and options for students proliferate. And just to be clear, I’m very specifically talking about public colleges and universities, not other higher ed institutions, while Pearlstein generalizes across the higher education spectrum.
Less research, more teaching: It’s simply not the case, as Pearlstein erroneously claims, that the vast majority of work published in the humanities and social sciences is not cited by other scholars and so has no value. As Yoni Appelbaum pointed out yesterday, Pearlstein is guilty of citing bad data when he repeats this claim. We don’t accept such carelessness from our students, so we shouldn’t accept it from our professors.
But, being wrong about one thing doesn’t make him wrong about everything.
I happen to think he is correct when he argues that we should, “offer comparable pay and status to professors who spend most of their time teaching, reserving reduced teaching loads for professors whose research continues to have significance and impact.”
One of the questions the Ernst & Young report on Australian higher education asks is: “Can your institution maintain a strong competitive position across a range of disciplines?”  I would say that the answer is “no” for the vast majority of public colleges and universities in the U.S. There just isn’t enough money to go around in public higher education, and, really, how many doctoral programs in X, or MA programs in Y, or BA programs in Z, does a state higher education sector need?
But we all seem to want to offer everything to our students, leading to a lack of differentiation. The result is market confusion and, as the Bain report on U.S. higher education points out, “Who will pay $40,000 per year to go to a school that is completely undistinguished [from similar schools]?”
What’s the solution? First, as I argued in my previous post, we need to eliminate some programs, and downsize others. In addition to the examples I offered earlier (including my own department, which I argue should be downsized over time), I would offer up the examples of Geology and Philosophy. According to the State Council of Higher Education in Virginia, in the 2013-14 academic year, the top 10 public colleges and universities in the state awarded 108 bachelor’s degrees in Philosophy and 126 in Geology. Students graduated with Philosophy degrees from seven different schools, and those receiving Geology degrees graduated from five.
It seems (to me anyway) quite reasonable to ask why, in a state system where only slightly more than 100 students per year are receiving degrees in a given discipline, it is necessary to staff up sufficiently (and allocate the physical space) to offer those degrees at five or seven different institutions. Wouldn’t it make much more sense to consolidate those degree programs and offer them at only three or perhaps four institutions? Courses in Geology and Philosophy could (and should) still be offered anywhere in the system as part of a general education curriculum, but given the general lack of differentiation from one university to another, it seems to make sense to focus our resources a bit so we can build stronger programs at fewer institutions.
In such a scenario we would then have to say to students who wanted a degree in Geology or Philosophy: “Here are your three choices in Virginia.” Would that be so wrong?
The Bain report calls this “differentiation” and the Ernst & Young report calls it becoming “niche dominators,” but the result is the same. Students who want a degree in a less popular discipline would have fewer choices, but those choices would be stronger, more diverse, and have more resources.
The second part of the answer, as Pearlstein correctly argues, is that we need a clear path to professional success–pay and status–for excellent teachers who are not productive researchers at our public colleges and universities. This is already the case at the majority of public institutions, but with each passing year, colleges and universities chase elusive rankings that revolve around research productivity by emphasizing research over teaching. Larry Cuban explained how this happened in history departments in a book published way back in 1999, and the story he told then just continues to repeat itself in a variety of disciplines across the country.
If the pathway to success at our top ranked public colleges and universities had two lanes — the research lane and the teaching lane — that led to the same salary, benefits, and other rewards, it’s quite easy to imagine that some significant number of our colleagues would opt for the teaching lane, even if it meant teaching more classes and more students. But the reward and status structure would need to be the same, or almost no one would make this choice when they could have more reward and status in the research lane.
If, however, we got the incentives right, and reduced, eliminated, or consolidated academic programs across state systems, cost structures at our public colleges and universities would look a heck of a lot better than they do today.
My colleague Steve Pearlstein’s weekend column in the Washington Post has generated more than its fair share of attention and, well, backlash. Perhaps the two most cogent negative responses I’ve seen were from Dan Drezner (Pearlstein’s WaPo colleague) and Matt Reed at Inside Higher Ed. Both take Pearlstein to task in some pretty tough, and I have to say, deserving, language, pointing out numerous serious flaws and/or oversimplifications in his analysis.
I want to stipulate at the beginning of this post that I know Steve, I like him, have been a fan of his writing for years (which is not to say I always agree with him), and I know him to be a thoughtful teacher, devoted to getting it right in the classroom, because I have sat in on his classes and had long discussions with him about teaching and learning. In several conversations over the past few years I have enjoyed his fresh perspective on an institution where I have worked for 15 years now and on an industry that I have worked in for 32.
That said, like his critics, I found a lot to disagree with in his essay, especially when you drill down to the specifics. Like any good polemic, though, this column made me think, in particular about the future of the university as we know it. I’ve had a lot to say about that in this space over the years and so I’m indebted to both Pearlstein and his critics for prodding me to think anew about issues that have troubled me for quite a while.
My own perspective on the issues he raises (about cost structures, efficiencies, etc.) comes from a decade as an administrative management consultant in higher ed before I joined the faculty ranks, and more recently in various roles as an associate dean, the director of our largest interdisciplinary program, and as a fellow in both the provost’s and president’s offices over the past couple of years. In these various capacities I’ve had the opportunity to see how academic administration works at more than 80 institutions at a more surface level (as a consultant) and at a much deeper level here at George Mason.
In preparation for this series of posts, I actually read the Bain report Pearlstein cites in his essay, and I would strongly suggest that anyone involved in higher education should read it, as well as “The University of the Future,” a report by Ernst & Young for the Australian Ministry of Education. Sure, sure, these reports are written by “outsiders” and “accountants” and so are easily dismissed by those who want to have a knee jerk response to any assessment of what we do that is written by those who stand outside our industry. But the authors of these reports have done their homework and have some very useful (positive and negative) critiques of the business model of the modern university — and it’s worth noting that they are focused on universities, not community colleges or small liberal arts colleges.
With those reports, and my own experiences as background, I am writing a series of posts (because there is just too much to say in one post) in response to what Pearlstein wrote:
Cost savings. Pearlstein’s solution to cost control is to “cap administrative costs.” If only life were so simple. He is correct that administrators spend way too much time meeting with one another — I know from personal experience what a “meeting culture” we have in academic administration. And he is correct that we spend too much money on administration and can find efficiencies. But it is also the case that a lot of the proliferation of administration in universities is driven by external mandates — from legislatures, the federal government, and the welter of accrediting bodies that run us through the wringer every few years (or every year). Were I king of the world, I’d eliminate every single external accrediting body, wipe the slate clean, and then start over with a system that makes sense. The one we have right now makes anything but sense.
More useful than Pearlstein’s analysis is the one you can find in the Bain report he cites: “As colleges and universities look to areas where they can make cuts and achieve efficiencies, they should start farthest from the core of teaching and research. Cut from the outside in, and build from the inside out.” [p. 5-6] The Ernst & Young report similarly argues for a rebalancing of administrative expenditure away from peripheral activities and back to the core (teaching and research) that produce revenue. 
At the same time, public universities, like mine, should stop already with the amenities arms race. No more new fancy residence halls, no more luxe dining or fitness facilities. Students who select a university for its amenities (and I suspect there are actually few such students) should just go somewhere else (and somewhere likely pricier). Public universities have a teaching, research, and economic development mission, and amenities advance none of those three goals.
Just as important, however, we need to recognize that academic programs come and go — that their popularity and/or utility in the world we live in is greater or lesser with the passage of time. And this means we have to delete or curtail programs that once were more popular or more useful. Let’s face it, universities almost never do this. As the Bain report puts it: “As new programs are added, old programs often are not curtailed or closed down.” 
As a case in point, I offer two examples (from many possible dozens) from my own university. If you are an undergraduate student at George Mason, you can declare a minor in Urban and Suburban Studies. Declaring such a minor would be a mistake, because you will have to un-declare it at some point in order to graduate. Why? We don’t offer the three required courses in the minor, have not, to my knowledge, offered those required courses since at least 2009, and there is no prospect that we will offer them anytime soon.
When I was an associate dean, I tried to have that minor (and about a dozen others) deleted from the catalog. Ultimately, I was successful in having one — the minor in New Europe — deleted. How did I pull off that great administrative success? I had myself made director of the minor and then, as the director, applied to have it deleted. Not even my evidence that we had never graduated a student with a minor in Urban and Suburban Studies, and had not offered the required courses for years, swayed the various powers that be to delete that minor.
Ah, but Mills, minors cost us nothing, the argument went. They are made up of existing courses (in most cases) and so just funnel a few extra students into those classes. If only this were a good answer. First, it ignores the fact that everything has costs associated with it, and that when aggregated, those costs add up. In the case of Urban and Suburban Studies, every time we update the catalog someone has to check the copy for that minor. And every time we update the website, someone has to update that page. These are tiny costs, to be sure, but when spread across the more than 50 minors we offer in my college alone, they add up, both as real costs and as opportunity costs.
Lest you think I’m picking on just one minor here, our associate provost for graduate education could give you a list of all the graduate degree or certificate programs at my institution that have never graduated a student, and of the (far too) many graduate courses spread across the university that have never been offered.
And, lest you think I’m picking on others instead of my own department, I would argue that my department (History and Art History) is one of those that ought to contract. Like many (most?) history departments around the country, we are in the midst of a long, slow slide in majors. Yet even as we slip down this slope, we have no intention of giving up faculty slots, and we will fight to hold on to what we have had in the past on the premise that getting smaller is bad.
We are very resistant to changes in our departmental size for a whole variety of reasons, some good, some bad. Pearlstein claims, caustically and not entirely incorrectly, that getting smaller might mean an increase in our teaching loads and thus take time away from our research activities. The main way we have managed to reduce teaching loads is on the backs of faculty who are not eligible for tenure — adjuncts and those on annual term contracts that include some benefits. In my department, for instance, over the three previous academic years, 64% of all undergraduate students taking a history course in the fall semesters were taught by faculty who are not eligible for tenure, and who are also paid much, much less.
Given our decline in majors, what really should be happening in my department (and in any other department facing a similar decline in student interest) is that we should reduce the number of upper-level courses we offer and, over time, get smaller as retirements and departures happen. In essence, we need to rebalance the size of our faculty with our enrollment of BA, MA, and PhD students, something we are very reluctant to even consider.
But consider it we must. And not just at George Mason. Back in June, Rebecca Spang, a historian at Indiana University and member of the university’s Faculty Council, said that some departments within the college “may have gotten bigger than they need to be,” and could get smaller. Spang pointed the finger at her own department as one that probably could contract.
Of late, Bryan Alexander has been calling attention to what he calls “Queen Sacrifice” at colleges and universities across the country. These sacrifices are happening and will continue to happen unless we take seriously the notion that as new programs are added, old ones need to close or be curtailed.
And, as the Bain report points out, on the administrative side of the house it is not the top-level or front-line service positions that need to be cut; it’s middle management [6-7]. The Bain authors are not wrong: we can’t easily cut the number of people providing security, mental health counseling, basic tech support, and other similar front-line services, but we can substantially reduce the layers between those front-line service providers and the upper levels of our administrations.
If that means we have fewer or shorter meetings, I’m okay with that.
Every day we receive questions about our touch displays, the technology they use, and how they stack up against our competitors. We’ve been developing large-scale multitouch tables and displays since 2008, and we’ve incorporated a number of different types of technologies as our product line has evolved. With so many differences in touch technology, screen resolution, build quality, and durability, choosing between displays can be confusing for potential purchasers.
Our focus has always been on creating the highest-quality displays and touch tables for museums and others. We are committed to incorporating the latest and most responsive touch technology. All of our models are lockable, have push-button operation, and are housed in aluminum chassis that we design and build here in the US. Our displays and touch tables tend to be more expensive than those of our competitors, but if you understand what goes into them, you’ll understand why.
So how do you evaluate which touch display or touch table is best? Here are the qualities that matter most.
There are huge differences in the fidelity and quality of various touch technologies. Here we outline the two most common technologies that are used in large-scale displays: IR overlays and Projected Capacitive Touch.
IR overlay is the most common type of touch technology. This optical technology uses infrared light. As your finger breaks the beams of light, the system detects a touch point.
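For the technically curious, the beam-break idea can be sketched in a few lines of Python. This is a hypothetical illustration, not controller firmware; the function name and beam indices are invented for the example:

```python
# Hypothetical sketch of IR-overlay sensing: emitters and receivers along the
# bezel form a grid of infrared beams. A touch is inferred wherever an
# interrupted beam on the X axis crosses an interrupted beam on the Y axis.

def detect_touches(broken_x, broken_y):
    """Return (x, y) touch candidates from the indices of interrupted beams."""
    return [(x, y) for x in sorted(broken_x) for y in sorted(broken_y)]

# A finger near beam column 12, row 7 interrupts one beam on each axis:
print(detect_touches({12}, {7}))  # [(12, 7)]
```

Note the cartesian product: with two simultaneous fingers, this naive scheme reports four candidate points (two real, two “ghosts”), which is one reason inexpensive IR overlays can struggle with reliable multitouch.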
IR overlays work pretty well for touch walls and in environments where light can be controlled. They can, however, be susceptible to light interference, particularly from sunlight. The IR overlay requires a bezel around the edge of the screen, so the surface is not completely flat. For this reason, we no longer use IR overlays for any of our standard-sized flat multitouch tables (we still use it for our Drafting Table 65, Presenter 65, and Presenter 75 models). The bezel on IR systems can make cleaning and maintenance a bit more difficult, as the bezel can collect grime.
The quality of IR overlays can vary significantly, as can the number of touch points supported. IR overlays are commonly used in inexpensive displays. These overlays can be purchased to fit existing screens, but these are not “hardened” solutions that perform well in public spaces. It is worth mentioning that if a vendor is selling a touch screen and doesn’t mention which technology is used, it is almost always an IR overlay.
Projected Capacitive Touch (PCT, also PCAP) is emerging as the clear choice for touch technology. It has been used in smartphones and tablets for years, but it has been difficult to scale to larger screens. Projected Capacitive Touch uses a thin layer of conductive material to form a grid. As voltage is applied, an electrostatic field is created across the grid. When users touch the screen (or another capacitive object touches it), the field is distorted. Hardware and software are then used to recognize the touch points.
Since Projected Capacitive Touch is non-optical, it is impervious to light interference. Unlike IR overlays, it is also bezel-less. PCAP is the first choice for most people who understand touch technology well. However, as with IR overlays, there are differences among the available PCAP solutions. There are a few different types of PCAP, including metal mesh, ITO (indium tin oxide), and silver nanowire. All of these can work well; what matters is the number of touch points and the fidelity and responsiveness of the particular touch panel. We seek out touch solutions that provide the best combination of these qualities. When comparing PCAP displays, it is worth looking at specifications such as response time and the number of simultaneous touch points.
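To make the “field distortion” idea concrete, here is a hypothetical Python sketch of the sensing step. The names, grid values, and threshold are invented for illustration; real PCAP controllers do this in dedicated hardware at high scan rates:

```python
# Hypothetical sketch of projected-capacitive sensing: the controller keeps a
# baseline capacitance reading for each node of the grid, and reports a touch
# wherever the live measurement deviates from baseline by more than a noise
# threshold (a finger distorts the electrostatic field at nearby nodes).

def detect_touches(baseline, measured, threshold=5):
    """Return (row, col) grid nodes whose capacitance delta exceeds threshold."""
    touches = []
    for r, (base_row, meas_row) in enumerate(zip(baseline, measured)):
        for c, (b, m) in enumerate(zip(base_row, meas_row)):
            if abs(m - b) > threshold:
                touches.append((r, c))
    return touches

baseline = [[100, 100], [100, 100]]
measured = [[100, 100], [100, 92]]   # a finger distorts the field at node (1, 1)
print(detect_touches(baseline, measured))  # [(1, 1)]
```

Unlike the beam-grid approach, each node is sensed independently, which is why PCAP handles many simultaneous touches without ghost points.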
HD (high definition) and 4K UHD (ultra high definition) are the two display resolutions available for large-scale touch displays and touch tables. UHD displays have a resolution of 3840 x 2160 vs. 1920 x 1080 for an HD display. For smaller displays (42” or 46”), HD is generally fine for displaying most content. In fact, currently, most content is developed for HD displays.
Once you start to hit 55” or above, resolution begins to make more and more of a difference (at these larger sizes the same number of pixels is spread over more area, lowering pixel density and degrading content fidelity). Recently we stopped offering HD resolution for our 55” and larger displays. Regardless of from whom you buy, we would recommend that—unless your budget is very constrained—you “future proof” your investment by purchasing touch screens with UHD display capability.
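The pixel-density effect is simple arithmetic: pixels per inch (PPI) is the diagonal resolution divided by the diagonal size. A quick illustrative calculation shows a 55-inch HD panel falling to roughly 40 PPI, while UHD at the same size holds roughly 80 PPI:

```python
import math

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch for a display of the given resolution and diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_in

for label, w, h in [("HD", 1920, 1080), ("UHD", 3840, 2160)]:
    for size in (42, 55):
        print(f"{label} at {size} inches: {ppi(w, h, size):.0f} PPI")
```

At 42 inches, HD still manages about 52 PPI, which is why HD remains acceptable at the smaller sizes.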
While most people don’t think too much about the “case” the display comes in, it can make a big difference particularly for public spaces or even semi-public spaces such as schools or corporate settings. We started out working almost exclusively with museums, so we’ve concentrated on developing hardened solutions that are locked down for public spaces. Our cases are made out of aluminum with lockable access panels and we make them as thin and as durable as possible. Again, because this does add cost, the buyer needs to determine whether this level of quality and attention to detail is something worth paying for given their specific use-case.
Most touch displays, and even some touch tables, from our competitors are not fully protected for public spaces. Many displays use cheap plastic housings, which are neither durable nor thermally conductive. Aluminum has the added advantage of helping to dissipate heat inside the display.
The Complete Package?
Our 55” Presenter, Platform, Pro, and Drafting Tables all have 4K UHD displays with the latest in 3M Projected Capacitive touch technology in a rugged, aluminum, turnkey system. We are proud to say that we are currently the only company to offer a 55” 4K UHD display with 3M touch technology. Our products are designed, built, and supported here in the US, by an American-owned company. We take pride in what we do.
If you’ve made it this far in this blog post, we hope you have a better understanding of what makes our products unique.
Dan’s visit to the Apple Store prompts a discussion of the new iPad Pro, and just what you can and can’t do on Apple’s tablet. Are we all just too old to give up our laptops for tablets? The New York Times and Google recently teamed up to deliver another way to use your smartphone – for virtual reality, via Google Cardboard. Is this the beginning of an expansion of VR? Or is it just the View-Master of Mills’ and Stephen’s youth reborn? Finally, we discussed the recent study of media use by tweens and teens by Common Sense Media that highlighted the digital disparities facing low-income teens. In particular, although most have smartphones, they lack access to laptops or desktops on which to do the increasing amount of online homework teachers are assigning. Stephen and Dan talked about the key role of public libraries in giving teenagers access to computers and wireless Internet.
Running time: 47:50
Download the .mp3
This fall Ideum began an ongoing collaboration with the Institute of American Indian Arts (IAIA). IAIA is a four-year tribal college in Santa Fe, NM that offers a wide array of studies such as Studio Arts, Museum Studies, Digital Art, Cinematic Art, Creative Writing, and Indigenous Liberal Studies. We look forward to working with IAIA students, who will explore these paths and make new discoveries as important team members through paid, semester-long internships.
The first project of this collaboration is a large interactive touchscreen application to be installed at the Museum of Contemporary Native Arts (MOCNA) January 22–July 31, 2016 as part of The Influence & Art of Lloyd Kiva New exhibit. This project is the perfect marriage of tradition and technology. The interactive will feature two textile exhibit applications, a textile collection viewer and a textile creator, both designed and developed by Ideum in collaboration with experts from the Museum of Contemporary Native Arts. The textile collection viewer features printed textiles created by IAIA students during the 1960s and 1970s under New’s artistic direction, drawn from the museum’s permanent collection and documented by IAIA’s Museum Studies students.
The textile creator takes deconstructed elements of these textile designs and allows users to digitally experience the printmaking process via the touchscreen and a projection wall. Being able to use traditional indigenous imagery to inform contemporary indigenous art and exhibits while embracing innovative ways of communicating indigenous perspectives, culture, and design feels like a great fit for this collaboration, and we are excited to see how we can shape that future together.
The multitouch part of the interactive application runs on an Ideum custom 38-inch “stretch” monitor. The textile collection images and the textiles created by museum visitors will be projected onto the gallery walls. Visitors will have the option to share their creations to an IAIA Tumblr gallery. We will share more about this exciting project when it opens to the public in January! To learn more about Ideum custom hardware, take a look at our blog post. Visit our website to learn more about our Creative Services projects.