
Zotero (aka Firefox Scholar aka SmartFox)

Revision as of 13:44, 2 February 2006 by (Talk)


, makers of D2K, I2K and T2K.

is the tool-building arm of , a group now developing a Networked Interface for Nineteenth-century Electronic Scholarship.

It was founded in 2003 to begin a broad and, especially, a practical dissemination of Alfred Jarry's ideas into the field of general education. A poet and intellectual entrepreneur, Jarry invented the discipline of 'Pataphysics, which he called "the science of exceptions" and "the science of imaginary solutions". 'Patacriticism is a scholarly and pedagogical derivative of Jarry's late nineteenth-century initiative.

"Since Jarry, a healthy crop of 'pataphysical and 'patacritical resources has sprung up, like wildflowers among the wheat, in the great plains of imagination. Is the West calling a new Charles W. Eliot LLD to fashion an "N-Dimensional shelf of books", "The Jarry Classics"? Its foundations were laid in the late 19th-century's remarkable premonition of 'patacriticism.

"ARP is a workshop for designing and building educational tools. While ARP's tools are digital, they take their origin from a continuing investigation into the technology of the book and its extended network of communicative mechanisms. The power and sophistication of that network -- its capacity for simulating and harnessing creative human invention -- dwarfs our current digital tools and networks. Nonetheless, digital instruments are already clearing a space for themselves. We believe that digital culture will prosper to the degree that it can expose, understand, and augment our inherited bibliographical technology."

at the  promotes cartography, historical geography and geographic information science (GIS) as essential disciplines within the field of ancient studies through innovative and collaborative research, teaching, and community outreach activities.
is a Flash tool that allows the user to parse a piece of music for analysis.

The changing faces of Technology:

A category of hardware and software that enables people to use the Internet as the transmission medium for telephone calls. For users who have free or fixed-price Internet access, Internet telephony software essentially provides free telephone calls anywhere in the world. To date, however, Internet telephony does not offer the same quality of telephone service as direct telephone connections. There are many Internet telephony applications available. Some come bundled with popular Web browsers; others are stand-alone products. Internet telephony products are sometimes called IP telephony, Voice over the Internet (VOI), or Voice over IP (VoIP) products.

Internet Technology Links:


Since 1994, the Center for History and New Media (CHNM) has used digital media and computer technology to change the ways that people—scholars, students, and the general public—learn about and use the past. We do that by bringing together the most exciting and innovative digital media with the latest and best historical scholarship. We believe that serious scholarship and cutting-edge multimedia can be combined to promote an inclusive and democratic understanding of the past as well as a broad historical literacy that fosters deep understanding of the most complex issues about the past and present. CHNM's work has been recognized with major awards from the American Historical Association and other national organizations, as well as with grants from the Sloan, Rockefeller, Gould, Delmas, and Kellogg foundations, the National Endowment for the Humanities, the Department of Education, and the Library of Congress. Many CHNM projects have been undertaken in collaboration with the American Social History Project (ASHP)/Center for Media and Learning at The Graduate Center of The City University of New York (CUNY). A project of the .

"The collaborative timeline tool was designed by Casey Alt and Vince Dorie as a means for communities to construct multiple parallel timeline categories representing various structural dimensions significant to the development of a research field (or just about any subject of interest). Events can be added to the timeline by any member of the designated community together with any documentation related to the event, including original documentation to be stored in our database, or links to already existing documentation on the web. Links with commentary and accompanying documentation can be drawn between events considered closely connected. Events can be 'flagged' by members of the community for importance. The entire structure can be filtered to reflect the views of individual contributors or groups of contributors. All levels of the timelines, including events, commentary, and links are supported by a forum-type threaded discussion that is fully searchable."

is developing a data model and set of tools that will allow users of digital resources to assemble and share virtual "collections" and to present annotated "exhibits" and re-arrangements of online materials. These critical rearrangements can of course bring together materials that are variously diverse: materially, formally, historically.
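The event model described above (contributed events, community flags, filtered views) can be sketched roughly as follows. This is an illustrative data model only; the field and function names are invented, not taken from the actual tool.

```python
# Toy model of a community timeline: events with categories, flags,
# and filtering by contributor. All names here are illustrative.
from dataclasses import dataclass, field

@dataclass
class Event:
    title: str
    year: int
    category: str            # one parallel timeline track, e.g. "institutions"
    contributor: str
    flagged_by: set = field(default_factory=set)   # members who marked it important
    links: list = field(default_factory=list)      # (other_event_title, commentary)

def filter_by_contributors(events, contributors):
    """Reduce the timeline to the view of a subset of contributors."""
    return [e for e in events if e.contributor in contributors]

events = [
    Event("Lab founded", 1994, "institutions", "alice"),
    Event("First dataset released", 1998, "data", "bob"),
]
events[0].flagged_by.add("bob")            # a community member flags an event
mine = filter_by_contributors(events, {"alice"})
```

Links between events and threaded discussion would hang off the same records; the essential point is that flags and filters are per-member, so the structure can be re-projected for any subgroup.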

The first Rossetti rearrangements will be undertaken by the Archive's general editor and by a few invited literary scholars and art historians, who will act as guest critics and curators, offering radically different perspectives on Rossetti and his circle, all based on the same corpus of digital files. Later, individual users will be able to assemble and comment on Archive materials in private collection spaces, choose whether to make those assemblages available to others, and then build and share annotated exhibits based on their own virtual collections or on existing, user-created work.

This toolset, developed under the direction of Bethany Nowviskie, aims to reveal the interpretive possibilities embedded in any digital archive by making the manipulation and annotation of archived resources open to all users. Once the basic collection/exhibition schema has been tested on the Rossetti Archive, it will be made available to all NINES projects.

"Colloquium is a webcasting tool developed under the auspices of NITLE in order to facilitate the sharing of teaching resources among NITLE colleges. Using Colloquium's streaming audio or video features:

• A faculty member can "guest lecture" in a colleague's class on a distant campus without leaving home.

• Visiting lecturers can be shared, in terms of both cost and content, among several institutions."

Setting industry standards for digital collection management, CONTENTdm provides tools for everything from organizing and managing to publishing and searching digital collections over the Internet.

The most powerful and flexible digital collection management package on the market today, CONTENTdm handles it all—documents, PDFs, images, video, and audio files. CONTENTdm is used by libraries, universities, government agencies, museums, corporations, historical societies, and a host of other organizations to support hundreds of diverse digital collections.

"In order to facilitate our research activities, we have developed an application environment for data mining. D2K - Data to Knowledge is a rapid, flexible data mining and machine learning system that integrates analytical data mining methods for prediction, discovery, and deviation detection, with data and information visualization tools. It offers a visual programming environment that allows users to connect programming modules together to build data mining applications and supplies a core set of modules, application templates, and a standard API for software component development. All D2K components are written in Java for maximum flexibility and portability."
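The "connect modules together" idea in D2K is a dataflow pipeline. A rough Python analogue (not D2K's actual Java API; all names below are invented for illustration) might look like this:

```python
# Toy analogue of a D2K-style modular pipeline: each "module" is a
# function over a stream of records, and an itinerary chains them.
# These module names are illustrative, not part of D2K itself.

def load(records):
    """Source module: materialize the input."""
    return list(records)

def normalize(rows):
    """Transform module: clean up field values."""
    return [{k: str(v).strip().lower() for k, v in r.items()} for r in rows]

def predict(rows):
    """Stand-in for an analytical module: label rows by a trivial rule."""
    return [dict(r, label="long" if len(r["text"]) > 10 else "short") for r in rows]

def run_itinerary(modules, data):
    """Run the modules in order, feeding each one's output to the next."""
    for module in modules:
        data = module(data)
    return data

result = run_itinerary([load, normalize, predict],
                       [{"text": "  A Very Long Sentence  "}, {"text": "hi"}])
```

D2K's visual environment wires such modules graphically and swaps them behind a standard component API; the chaining logic is the same.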

is social bookmarking software that offers a range of functionality:

• by choosing a unique tag, groups of users can build a shared, topical bookmark list

"The (DCP) brings together faculty and graduate students from across the UC system who are actively engaged with the history and theory of new digital technologies and the ways in which they are changing humanistic studies and the arts. It also serves as an agency through which faculty and graduate students who have not been actively engaged in these matters can learn about them in order to incorporate them in their future work. The project is based at UC Santa Barbara, where the English Department is the home to Transcriptions, an NEH-supported project concerned with digital technology in research and teaching. The Multi-Campus Research Group (MRG) sponsors five interrelated activities."

Notes on the Tool Summit are being written on this wiki.
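The tag-based grouping mechanism mentioned in the bullet above is simple to sketch: users who agree on a unique tag effectively build one shared, topical list. The data below is invented for illustration.

```python
# Sketch of tag-based social bookmarking: filtering everyone's
# bookmarks by an agreed-upon tag yields a shared, topical list.
bookmarks = [
    ("alice", "http://example.org/a", {"dh2005", "tools"}),
    ("bob",   "http://example.org/b", {"dh2005"}),
    ("carol", "http://example.org/c", {"recipes"}),
]

def shared_list(bookmarks, tag):
    """All bookmarks, from any user, carrying the given tag."""
    return [(user, url) for user, url, tags in bookmarks if tag in tags]

group = shared_list(bookmarks, "dh2005")   # alice's and bob's entries
```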

Exploration of Resources

Joanna is interested in notions of "presence" in 18th-century French and English philosophers. She calls up her Scholar’s Aide (Schaide) utility to find the texts she wants to study. By clicking and dragging texts that meet her needs into Gatherings she creates a personal study collection that she can examine. An on-line thesaurus helps her put together a list of words in French and English that indicate presence (such as near and proche), and she searches for texts containing those words. She then launches a Schaide search that only looks in her Gathering, even though the texts are in different formats and at different sites. When she checks in after teaching her Ethics of Play class she finds a concordance has been gathered that she can sort in different ways and begin to study. She saves her concordance as a View to the public area on the Schaide Site so her research assistant can help her eliminate the false leads. Maybe she’ll use the View in her presentation at a conference next week once she’s found a way to visualize the results according to genre.

How can Humanists ask questions of scholarly evidence on the Web? Humanists face a paradox of abundance and scarcity when confronting the digital realm. On the one hand, there has been an incredible growth in the number and types of documents reflecting on our cultural heritage that are now available in digital form. Projects like Google Print will in the coming years dramatically expand that abundance. Tools for discovering, exploring, and analyzing those resources remain limited or primitive, however. Only commercial tools, such as Google, search across multiple repositories and across different formats. Such commercial tools are shaped and defined by the dictates of the commercial market rather than the more complex needs of scholars. The challenges faced by scholars using commercial search tools are:

• It is hard to ask questions across intellectually coherent collections. What the inquirer considers a collection is usually spread across different on-line archives and databases, each of which will have a different search interface.

• Many resources are inaccessible except with local search facilities and many are gated to prevent free access.

• You cannot ask questions that take advantage of the metadata in many electronic texts indexed by commercial tools.

• You cannot ask questions that take advantage of structure within electronic scholarly texts (such as those encoded in TEI XML).

• Where there is structure, it is rarely compatible from one collection to another.

• Collections of evidence are in different formats, from PDF to XML.

What kinds of tools would foster the discovery and exploration of digital resources in the humanities? More specifically, how can we easily locate documents (in multiple formats and multiple media), find specific information and patterns across large numbers of differently formatted documents, and share our results with others in a range of scholarly disciplines and social networks? These tasks are made more difficult by the current state of resources and tools in the humanities. For example, many materials are not freely available to be crawled through or discovered because they are in databases that are not indexed by conventional search engines or because they are behind subscription-based gates. In addition, the most commonly used interfaces for search and discovery are difficult to build upon. And the current pattern of saving search results (e.g., bookmarks) and annotations (e.g., local databases such as EndNote) on local hard drives inhibits a shared scholarly infrastructure of exploration, discovery, and collaboration.

The tasks are large, and many types of tools are needed to meet these goals. Among other things, our group saw the need for tools and standards that would facilitate:

• Multi-resource access that provides the ability to gather and reassemble resources in diverse formats and to convert and translate across those resources.

• A scholarly gift economy in which no one is a spectator and everyone can readily share the fruits of their discovery efforts.

• Serendipitous discovery and playful exploration.

• Visual forms of search and presentation.

But the group had a strong consensus, concluding that the most important effort would be one that focused on developing sophisticated discovery tools that would allow new forms of search and make resources accessible and open to discovering unexpected patterns and results. We described this as a “Google Aide for Scholars” (or Schaide in the story above) — something much broader than the bibliographic tool Google Scholar — that would be built on top of an existing search engine like Google but would allow for much more sophisticated searches than Google. Our talk of “Google” was not, however, meant to limit ourselves to a particular commercial product but rather to signal that we were interested in building on top of the existing infrastructure created by the multi-billion dollar search-industry giants such as Yahoo, MSN, and Google. Some of Schaide’s features would be:

• It would take advantage of commercial search utilities rather than replace them.

• It would allow scholars to create gatherings of resources that fit their research rather than be restricted by the boundaries of individual resources. These gatherings could be shared.

• It would allow scholars to formulate search questions in different ways that could be asked of the gatherings.

• It would allow scholars to ask questions that take advantage of metadata, ontologies and structure.

• It would negotiate across different formats and different forms of structure.

• It would allow researchers to save results for further study or sharing.

• It would allow researchers to view results in different ways.
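The features above amount to a thin layer over existing search infrastructure. A minimal sketch of the "gathering" idea, in which a named set of resources is queried as one unit, might look like the following; the per-archive search functions and their contents are entirely hypothetical stand-ins for real search APIs.

```python
# Hypothetical sketch of a Schaide-style "gathering": a named set of
# resources, each with its own search function, queried as one unit.

def search_archive_a(term):
    """Stand-in for one site's local search facility."""
    docs = {"doc1": "presence and proximity in Diderot"}
    return [d for d, text in docs.items() if term in text]

def search_archive_b(term):
    """Stand-in for a second site with a different interface."""
    docs = {"doc2": "the presence of the reader", "doc3": "ethics of play"}
    return [d for d, text in docs.items() if term in text]

class Gathering:
    def __init__(self, name):
        self.name = name
        self.sources = []            # per-site search callables

    def add(self, source):
        self.sources.append(source)

    def search(self, term):
        """Fan the query out to every source and merge the hits."""
        hits = []
        for source in self.sources:
            hits.extend(source(term))
        return hits

g = Gathering("18th-century presence")
g.add(search_archive_a)
g.add(search_archive_b)
hits = g.search("presence")          # results from both archives at once
```

The point of the sketch is the shape, not the implementation: the scholar defines the intellectually coherent collection, and the tool hides the fact that it spans several sites and interfaces.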

Just as Google and the other search engine companies have created an essential search infrastructure that a tool-building effort like ours needs to leverage, there are also specific tool-creation efforts underway that we should at least examine closely and perhaps even embrace. Several were mentioned and discussed as part of the brainstorming process: Pandora (a search tool for music); Content Sphere (a personal search engine developed by Michael Jensen); Meldex (another music search tool); Syllabus Finder and H-Bot (tools that make use of the Google API, developed by Dan Cohen at CHNM); Firefox Scholar (a scholarly organization and annotation tool, also from CHNM); I Spheres (middleware that sits on top of digital collections); TAPoR (an online portal and gateway to tools for sophisticated analysis and retrieval based at McMaster University); Antarctica (commercial data mining by Tim Bray); Citeseer; Proximity (a tool for finding patterns in databases developed by Jensen); personal search from commercial search engines (Google personal search and Yahoo Mindset); Amazon's A9; Clusty; and data-mining packages (NORA, D2K, and T2K from NCSA).

We developed several key specifications for this new Google for Scholars. It would be extensible through web services and, hence, might work as a plug-in to Firefox or some other open client. It would be transparent in the sense that it would show you how it was working rather than simply hide its magic behind the scenes. It would also offer customizable utilities like a “query builder” that would allow you to write your own regular expressions and ontologies. Most important, it would be able to plug in any ontology; filter results in complex ways and save those filters; classify and tag results; and display, aggregate, and share search results.
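To make the "query builder" idea concrete, here is a toy version that expands a concept from a small hand-made ontology into a regular expression. The ontology entries (and the "presence" example, echoing the Joanna scenario above) are invented for illustration; a real pluggable ontology would be far richer.

```python
# Toy "query builder": expand a concept from a hand-made ontology
# into one case-insensitive regular expression. Entries are invented.
import re

ontology = {
    "presence": ["presence", "proche", "near", "nearness"],
}

def build_query(concept, ontology):
    """Turn a concept's term list into a single word-boundary regex."""
    terms = ontology[concept]
    return re.compile(r"\b(" + "|".join(map(re.escape, terms)) + r")\b", re.I)

pattern = build_query("presence", ontology)
matches = pattern.findall("Near the window she felt a presence, tout proche.")
```

A saved filter, in this scheme, is just a named compiled query that can be re-applied to any gathering or shared with other users.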

But the success of such a tool also rests on the formatting of the resources that it seeks to access for the scholar. Scholarly resources — whether commercial aggregations (such as ProQuest Historical Newspapers), digital libraries (such as American Memory and Making of America), gated repositories of scholarly articles (such as JSTOR), or, especially, the emerging mega-resource promised by Google Print — need to be visible and open. Achieving that goal is more of a social and political problem than a technical challenge. But we can facilitate that goal by offering guidelines for how to make a site visible through existing and emerging standards, such as OAI and the XML approach followed by Google.
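As a concrete illustration of the standards-based visibility mentioned above, an OAI-PMH harvesting request is just a URL with a `verb` and a few parameters. The sketch below constructs a ListRecords request; the base URL is a placeholder, not a real repository.

```python
# Building an OAI-PMH ListRecords request URL: the kind of simple,
# standards-based entry point a repository can expose to harvesters.
# The base URL here is a placeholder, not a real endpoint.
from urllib.parse import urlencode

def oai_list_records(base_url, metadata_prefix="oai_dc", **kwargs):
    """Assemble an OAI-PMH ListRecords request with optional arguments
    such as set, from, or until."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    params.update(kwargs)
    return base_url + "?" + urlencode(params)

url = oai_list_records("http://example.org/oai", set="history")
```

A repository that answers such requests with well-formed XML is automatically discoverable by any OAI-aware tool, which is exactly the low-cost openness the guidelines would promote.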

In general, then, we see on the one hand the need for a lobbying group that will promote making resources openly available and discoverable. On the other hand, we believe that actual tool development can proceed in an incremental and decentralized fashion through three different development groups: (1) a group developing a client-based tool (perhaps built into the browser) that can access multiple resources by building on Google; (2) a group developing a server-side repository that would aggregate information from searches and annotations; and (3) a decentralized group (or set of groups) that would write widgets, web services, and ontologies that would operate in the extensible client software as well as off the server.

Written by Roy Rosenzweig and Geoffrey Rockwell

Tools for interpretation:

This is not an existing tools project but rather a proposal for a tools project arising from the Digital Tools Summit at the University of Virginia. For more on the Summit see or notes at

Interpretation develops out of an encounter with material or experience, and out of a reaction to some provocation -- in the form of ambiguity, contradiction, suggestion, aporia, uncertainty, etc. In literary interpretation, you start with reading, and when you stumble on an ambiguity, you decide whether this is an interesting ambiguity, possibly a meaningful one, possibly an intended one. Next, you ask what opportunities for interpretation this ambiguity offers. In the next phase, interpretation moves from private to public, from informal to formal, as you rehearse and perform it, intending to persuade other readers to share your interest and your conclusions.

Commentary is one way to convey interpretation, and it can be embodied as annotation. Annotation might need to be attached to several points in the corpus of material under study: annotation always needs at least one point of attachment. You could have classes of commentary as well: a note to myself, a note to share, a note that has been peer-reviewed, a note that other people have noticed, a note that has been the subject of commentary, etc. Such annotations should attach to any type of media, and should allow production of commentary in many media as well.
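The requirements in that paragraph (at least one anchor point, classes of commentary) can be captured in a small data model. This is a sketch only; the class names, offsets, and visibility labels are invented, not drawn from any existing annotation tool.

```python
# Data-model sketch for annotation as described above: a note with one
# or more anchor points in a corpus and a "class of commentary"
# expressed as a visibility level. All names are illustrative.
from dataclasses import dataclass

VISIBILITY = {"private", "shared", "peer-reviewed"}

@dataclass
class Anchor:
    document_id: str
    start: int        # character (or sample/frame) offset into the medium
    end: int

@dataclass
class Annotation:
    body: str
    anchors: tuple             # at least one point of attachment
    visibility: str = "private"

    def __post_init__(self):
        assert self.anchors, "annotation needs at least one anchor"
        assert self.visibility in VISIBILITY

note = Annotation("possible deliberate ambiguity",
                  (Anchor("poem-12", 340, 388),),
                  visibility="shared")
```

Because anchors are just (document, start, end) triples, the same structure works for text, audio, or video, and annotations on annotations fall out naturally by letting an anchor point at another note's identifier.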

The discussion group on tools for interpretation identified the following abstract sub-components of annotation, as an interpretation-building process, grouped here by phases:

Phase 0
0.1 Identify the environment (discipline, media)

0.2 Encounter a resource (search, retrieval)

Phase 1
1.1 Explore a resource

1.2 Vary the scope/context of attention

Phase 2
2.1 Tokenize, segment the resource (automatically or manually)

2.2 Name parts, rename parts

2.3 Align annotation with parts (including time-based material)

2.4 Vary or match the notation of the original content

Phase 3
3.1 Sort and rearrange the resource (perhaps in something as formal as a semantic concordance, perhaps just in some unspecified relationship)

3.2 Identify and analyze patterns that arise out of relationships

3.3 Code relationships, perhaps in a way that encourages the emergence of an ontology of relationships (Allow formalizations to emerge, or to be brought to bear from the outset, or to be absent)
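Phases 2 and 3 above can be sketched end to end in a few lines: tokenize with offsets, name a part, align an annotation with it, and then rearrange the named spans into a crude concordance. The text and annotation below are invented examples.

```python
# Rough sketch of phases 2-3: tokenize a resource, align an annotation
# with a named part, then sort the spans into a minimal concordance.
import re

text = "The presence of the reader alters the presence of the text."

# 2.1 Tokenize/segment the resource, keeping character offsets.
tokens = [(m.group(), m.start(), m.end()) for m in re.finditer(r"\w+", text)]

# 2.2-2.3 Name a part and align an annotation with it.
parts = {"keyword": [(s, e) for w, s, e in tokens if w == "presence"]}
annotations = [("keyword", "recurs twice; possibly thematic")]

# 3.1 Rearrange: each occurrence of the named part with a little context.
concordance = [text[max(0, s - 8):e + 8] for s, e in parts["keyword"]]
```

Phase 3.3's relationship coding would layer on top of this: once parts have stable names and offsets, links between them can be recorded and, if patterns recur, formalized into an ontology of relationships.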

We considered that phases 0 and 1 were probably outside the scope of our immediate charge (though we hoped that other groups, like the group focusing on exploration, might help with some of these phases), and we thought that phases 2 and 3 were probably pretty squarely within the territory of tools for interpretation.

Further, we thought that tools for interpretation should ultimately allow you to do these things (including phases 0-3) in arbitrary order, and on or off the web (in the field, in other words). Though actually publishing annotations/interpretations/commentary is probably out of scope for a tool for interpretation, narrowly defined, we agreed that there's no question that one would want to disseminate interpretation at some point in the process, and that those annotations should ideally be connected to networked resources and to other interpretations.

We spent some time discussing the audience for the kind of tools we were imagining: developers? Power users? All humanists? High school students? With respect to users, we agreed that it was best to develop for an actual use, not a hypothetical one, but that it was also salutary to build for more than one use, if possible. This brought up the question of whether we envisioned tools for more than one (concurrent) user: in other words, are we talking about seminar-ware? How collaborative should these tools be, and how collaborative must they be? Should they have an offline mode (for some, the answer to this question was clearly yes)? Should they allow, support, or require serial collaboration? In the end, we decided that the best compromise was a single-user tool designed in awareness of a collaborative architecture (and we hoped to get some more information about what such an architecture might look like, from the collaborative group).

We also discussed some more specific technical matters, for example:

At this point, in an effort to bring our discussion to bear on a particular tool, and to cut short an abstract discussion of tools (in general) for interpretation, we focused on a very specific kind of tool for annotation, namely a "highlighter's tool." We supposed that this tool would:

Well pleased with ourselves for being so close to actual specs for an actual tool, we decided to go a step further and name some specific examples of uses and users. The following list suggests the range of topics, sources, and goals that we hope such a tool (or toolkit) might support:

At the end of the discussion, a straw poll showed that half of the eighteen people in the room wanted to build this kind of tool, and all of them wanted to use it. We closed the discussion by affirming, once again, that we should build for particular applications and users but also in view of an agreed-upon set of requirements. The building process should include communication, if not collaboration, with other developers. We hope that follow-up from this event will result in people in this discussion realizing a framework for collaboration, and building tools for interpretation such as the ones imagined in this discussion.

The final report includes sections on four possible tools:


Exploration of Resources


Time, Space, Uncertainty

as well as Conclusions

Fedora (the Flexible Extensible Digital Object Repository Architecture) is exactly what the name suggests: an architecture for storage of and access to digital objects (stored in METS-encoded XML). It includes a set of APIs for access to the repository. Funded by Mellon, developed at UVA/Cornell, open source.
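As a taste of what "METS-encoded XML" means, the sketch below builds a skeletal METS-style wrapper for one digital object. Real METS documents carry much more (fileSec, structMap, schema locations), and this is only a minimal illustration, not the repository's actual serialization.

```python
# Minimal sketch of a METS-style XML wrapper for one digital object.
# Real METS documents are far richer; this shows only the idea of an
# object identifier plus a descriptive-metadata section.
import xml.etree.ElementTree as ET

METS_NS = "http://www.loc.gov/METS/"
ET.register_namespace("mets", METS_NS)

mets = ET.Element(f"{{{METS_NS}}}mets", {"OBJID": "demo:1"})
dmd = ET.SubElement(mets, f"{{{METS_NS}}}dmdSec", {"ID": "DMD1"})
ET.SubElement(dmd, f"{{{METS_NS}}}mdWrap", {"MDTYPE": "DC"})  # Dublin Core wrapper

xml_bytes = ET.tostring(mets, encoding="utf-8")
```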

These were among the tools mentioned at the Fall 2004 Washington DC Area Forum on Technology and the Humanities: The Educated Browser: SmartFox, the Scholar's Web Browser

While many libraries and museums have put materials online, often at great expense, scholars and researchers using these institutions' online catalogs, collections, and documents currently have no easy or powerful way to use these resources, often resorting to a cobbled-together set of stand-alone applications (such as EndNote and Word) to make citations, take notes, and create personal collections and bibliographies. Few libraries and museums have had the resources to improve the user experience of their valuable resources.

The Center for History and New Media is building an open-source package of tools for libraries and museums that will work right in the web browser, where most research is now done. We are calling the project SmartFox: The Scholar's Web Browser, and it will enable the rich use of library and museum web collections at no cost to institutions -- either in dollars or, probably more importantly, in secondary technical costs related to their web servers. This set of tools will be downloadable and installable on any of the major open-source browsers related to the increasingly popular Firefox web browser: Firefox itself, Mozilla, and the latest versions of Netscape and the AOL browser (all based on the Firefox code base).

SmartFox will enable users, with a single click, to grab a citation to a book, journal article, archival document, or museum object and store it in their browser. Researchers will then be able to take notes on the reference, link that reference to others, and organize both the metadata and annotations in ways that will greatly enhance the usefulness of, and the great investment of time and money in, the electronic collections of museums and libraries. All of the information SmartFox gathers and the researcher creates will be stored on the client's computer, not the institution's server (unlike commercial products like Amazon's toolbar), and will be fully searchable. The Web browser, the premier platform for research now and in the future, will achieve the kind of functionality that the users of libraries and museums would expect in an age of exponentially increasing digitization of their holdings.
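The one-click grab-and-search workflow described above reduces, at its core, to a local store of citation records that is fully searchable. The sketch below illustrates that shape; the field names and sample records are invented and are not SmartFox's actual schema.

```python
# Sketch of client-side citation storage along the lines described
# above: records kept locally and searchable across every field.
# Field names and sample data are illustrative only.
records = []

def grab(title, creator, source, notes=""):
    """Store a citation record locally (the 'single click' grab)."""
    records.append({"title": title, "creator": creator,
                    "source": source, "notes": notes})

def search(term):
    """Full-text search across all fields of all stored records."""
    term = term.lower()
    return [r for r in records
            if any(term in str(v).lower() for v in r.values())]

grab("The Gutenberg Galaxy", "McLuhan", "library catalog",
     notes="compare with print-culture seminar readings")
grab("View of the Thames", "Whistler", "museum collection")

hits = search("print-culture")   # matches the note on the first record
```

Keeping the store on the client, as the paragraph emphasizes, means the institution's server bears no load for notes, links, or searches over the researcher's personal collection.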

SmartFox is being developed by the Center for History and New Media (CHNM) at George Mason University with funding from the Institute of Museum and Library Services (IMLS).

