Highway to Hell

Today we're living easy, living free because we're on the highway to Hell! We have a season ticket on a one-way ride to explore the Hell-mouth, a popular depiction of Hell in illuminated manuscripts. Raising a little Hell: full-page miniature depicting Archangel Michael locking the entrance to the Hell-mouth, from...

CUNY Games Conference 4.0

January 22, 2018 to January 23, 2018
Call for Papers
Attention all CUNY and non-CUNY Gamers! We have extended the deadline for abstracts until November 1st to accommodate additional faculty and student presentations! The CUNY Games Network is excited to announce the third annual CUNY Games Festival to be held on January 22 and 23, 2018 at the CUNY...
CUNY Graduate Center
365 5th Ave
New York, NY 10016
United States

Elisa Beshero-Bondar Digital Dialogue

In this talk, I will introduce the collaboration of the Pittsburgh Bicentennial Frankenstein team with MITH to produce a new and authoritative digital edition of the 1818, 1823, and 1831 published texts of Frankenstein linked with the Shelley-Godwin Archive edition of Mary Shelley’s manuscript notebooks. We have been hard at work on the project since fall, and aim to complete the project by May 2018, the bicentennial of the novel’s first publication.

Preparing the edition has given us a fascinating vantage point on early work with 1990s hypertext, as we began our work by up-converting hundreds of HyperCard files in Stuart Curran and Jack Lynch’s Pennsylvania Electronic Edition of Frankenstein. That hypertext edition represented groundbreaking digital scholarship in the era of Web 1.0, deploying an interface for reading the 1818 and 1831 texts in juxtaposed parallel columns. Our work on the project has involved polishing and repurposing the code of Curran and Lynch’s electronic editions of the 1818 and 1831 texts. With help from Rikk Mulligan, Digital Scholarship Librarian at Carnegie Mellon University, we have been correcting our restored text against photo facsimiles of the originals, and we have prepared plain-text and simple XML editions from OCR of the 1823 edition, derived via ABBYY FineReader and formatted like our editions of the 1818 and 1831. We have been preparing a new edition in TEI by first processing these documents with CollateX, which computationally locates the points of variance (or “deltas”) among the editions and outputs them as a single critical edition with TEI XML critical apparatus markup.
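CollateX does the real alignment work in this workflow, but the core idea of computationally locating deltas can be illustrated with a toy collation built on Python’s standard-library difflib. The witness texts and sigla below are invented for illustration; real collation (and CollateX’s alignment algorithms) is considerably more sophisticated.

```python
# A toy collation in the spirit of CollateX: align two witnesses token
# by token and report points of variance as TEI-style <app> entries.
from difflib import SequenceMatcher

def collate(siglum_a, text_a, siglum_b, text_b):
    a, b = text_a.split(), text_b.split()
    apparatus = []
    for tag, i1, i2, j1, j2 in SequenceMatcher(None, a, b).get_opcodes():
        if tag == "equal":
            apparatus.append(" ".join(a[i1:i2]))
        else:  # a delta: emit an apparatus entry with one <rdg> per witness
            apparatus.append(
                f'<app><rdg wit="#{siglum_a}">{" ".join(a[i1:i2])}</rdg>'
                f'<rdg wit="#{siglum_b}">{" ".join(b[j1:j2])}</rdg></app>'
            )
    return " ".join(apparatus)

# Invented sample readings, not actual Frankenstein variants:
print(collate("F1818", "the creature opened his eyes",
              "F1831", "the wretch opened its eyes"))
```

Shared tokens pass through untouched, so the output is a single running text interrupted only at the “hotspots” of variance, which is essentially the shape of a TEI parallel-segmentation apparatus.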

Collating the print editions establishes a basis for one last and especially challenging stage of our project. We are now working with Raffaele Viglianti to integrate the Shelley-Godwin Archive’s manuscript notebook drafts of Frankenstein with our critical edition of the published novels. For this we are planning a new implementation of TEI critical apparatus markup that points to specific locations in the manuscript notebooks. This will make it possible to build a reading interface for the novel that highlights “hotspots” of variance across the print editions and links into the relevant passages in the notebooks.
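An apparatus entry of the kind described above might look roughly like the following sketch. The sigla, readings, and pointer target are all invented for illustration; the edition’s actual encoding will differ.

```xml
<!-- Hypothetical sketch only: sigla, readings, and the pointer target
     are invented, not taken from the actual edition. -->
<app xml:id="app-hotspot-1">
  <rdg wit="#F1818">creature</rdg>
  <rdg wit="#F1823 #F1831">wretch</rdg>
  <!-- a manuscript reading pointing into the notebook transcription;
       "#notebook-v1-p7" stands in for a real Shelley-Godwin Archive URI -->
  <rdg wit="#MS-notebook">
    <ptr target="#notebook-v1-p7"/>
  </rdg>
</app>
```

The key move is that the manuscript reading carries a pointer rather than (or alongside) a transcription, so the print-edition apparatus can defer to the Shelley-Godwin Archive’s own encoding of the notebook page.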

We will be offering our bicentennial edition to update the one currently hosted by Romantic Circles. Our new edition’s reading interface should invite readers to learn the fascinating story of how the events and characters of the novel changed over the first decades of its life, from the time of its first drafts by its 18-year-old author to the changes imposed by authors and editors across the three published editions of 1818, 1823, and 1831. We hope our edition will inspire fresh investigations of longstanding questions about Frankenstein’s transformations, such as the extent of Godwin’s interventions in the text in 1823 and how many of these persist in the 1831 text. This dialogue offers a chance to share views of the new TEI edition underway, and invites reflection and discussion of our textual methods in stitching together our new textual “monster.”

The post Elisa Beshero-Bondar Digital Dialogue appeared first on Maryland Institute for Technology in the Humanities.

Call for Collaborators: The Open Digital Archaeology Textbook Environment (ODATE)

The Open Digital Archaeology Textbook Environment is a collaborative writing project that I am leading with Neha Gupta, Michael Carter, and Beth Compton. (See earlier posts on this project here.) We recognize that this is a pretty big topic to tackle, so we would like to invite friends and allies to become co-authors with us. Contact us by Jan 31st; see below.

Here is the current live draft of the textbook. It is, like all live-written openly accessible texts, a thing in the process of becoming, replete with warts, errors, clunky phrasing, and odd memos-to-self. I’m always quietly terrified to share work in progress, but I firmly believe in both the pedagogical and collegial value of such endeavours. While our progress has been a bit slower than one might’ve liked, here is where we currently stand:

  1. We’ve got the framework set up to allow open review and collaboration via the Hypothes.is web annotation framework, and the use of Github and gh-pages to serve up the book.
  2. The book is written in the bookdown framework with R Markdown, and so can have actionable code within it, should the need arise.
  3. This also has the happy effect of making collaboration open and transparent (although not necessarily easy).
  4. The DHBox computational environment has been set up and is running on Carleton’s servers. It’s currently behind a firewall, but that’ll be changing at some point during this term (you can road-test things on DHBox).
  5. We are customizing it to add QGIS and VSFM and some other bits and bobs that’d be useful for archaeologists. Suggestions welcome.
  6. We ran a test of the DHBox this past summer with 60 students. My gut feeling is that not only did this make teaching easier and keep all the students on the same page, but the students also came away with a better ability to roll with whatever their own computers threw at them.
  7. Of six projected chapters, chapter one is in pretty good – though rough – shape.
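The bookdown arrangement described in the list above can be sketched roughly as follows. This is a generic bookdown skeleton with made-up content, not ODATE’s actual source files.

````markdown
<!-- index.Rmd: a generic bookdown skeleton, not ODATE's actual source -->
---
title: "An Open Book"
site: bookdown::bookdown_site
output: bookdown::gitbook
---

# Introduction

Prose and "actionable" code live side by side; the chunk below executes
when the book is built with `bookdown::render_book("index.Rmd")`.

```{r artifact-counts}
# a made-up example computation
summary(c(12, 7, 31))
```
````

Because the rendered book is plain static HTML, it can be served from a gh-pages branch and annotated with Hypothes.is without any server-side machinery, which is what makes the open-review workflow in point 1 practical.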

So, while the majority of this book is being written by Graham, Gupta, Carter, and Compton, we know that we are leaving a great deal of material un-discussed. We would be delighted to consider additions to ODATE if you have particular expertise that you would like to share. As you can see, many sections in this work have yet to be written, and we would be happy to consider contributions aimed there as well. Keep in mind that we are writing for an introductory audience (who may or may not have foundational digital literacy skills) and that we are writing for a Linux-based environment. Whether you are an academic, a professional archaeologist, a graduate student, or a friend of archaeology more generally, we’d be delighted to hear from you.

Please write to Shawn at shawn dot graham at carleton dot ca by January 31st, 2018 to discuss your idea and how it might fit into the overall arc of ODATE. The primary authors will discuss whether or not to invite a full draft. A full draft will need to be submitted by March 15th, 2018. We will then offer feedback. The piece will go up on this draft site by the end of the month, whereupon it will enjoy the same open review as the other parts. Accepted contributors will be listed as full authors, e.g., ‘Graham, Gupta, Carter, Compton, YOUR NAME, 2018 The Open Digital Archaeology Textbook Environment, eCampusOntario…..

For help on how to fork, edit, make pull requests, and so on, please see this repo.


Featured Image: “My Life Through a Lens”, bamagal, Unsplash

ISAM 2017 – Libraries are for making

I recently participated in the International Symposium on Academic Makerspaces. I presented a paper co-authored by Jennifer Grayburn (formerly a Makerspace Technologist, and now at Temple University’s Digital Scholarship Center). I present here the slides and talking notes of the seven-minute presentation, and a link to the full paper [Link to PDF].

Good morning, and thank you for coming. My name is Ammon Shepherd. My paper, co-authored by Jennifer Grayburn, looks at how libraries are uniquely suited to provide makerspaces for traditionally book-bound disciplines.

Jen Grayburn works at the Digital Scholarship Center, located in Paley Library at Temple University in Philadelphia.

I am located at the Scholars’ Lab in the Alderman Library at the University of Virginia in Charlottesville, Virginia. We both come from humanities backgrounds, so this paper is light on empirical research and heavy on anecdotal evidence, though we are both working on tracking data and analyzing it with research questions in mind. The main question we sought to address with this paper is: how can we get more humanities researchers into our library makerspaces? In the paper we posit that libraries fill a unique role at universities because they are typically departmentally agnostic. Libraries, in general, cater to all faculty, staff, and students, and even members of the community.

With that in mind, Jen and I looked at both of our spaces (Yet Another Cross-space Comparison) and identified four comparable attributes in how we attract and support research from humanities researchers:

  1. accessibility,
  2. contextualization,
  3. collaboration,
  4. outreach.

Both our spaces seek to piggyback on the aforementioned phenomenon of libraries as an academically neutral space. But adding technology normally seen only in the STEM fields can be a mental barrier for humanities researchers.

To address this, both spaces first sought to break down any physical barriers to entry. We are both located in open spaces in the main library on campus.

The Scholars’ Lab space is in a prime study and group-work area with great natural lighting. Physically open access is relatively easy to address, but mental barriers take more detailed planning. The remaining comparison points, and some takeaways at the end, help to address the issue of breaking down mental barriers.

The major issue facing humanities researchers is frustration with technology, which is often the very reason they give for picking the humanities in the first place. How, then, to ease that burden?

Both the DSC and the SLab are staffed by individuals from very diverse backgrounds and skill levels. The DSC has full-time library staff, post-docs, and graduate students from departments ranging from science, architectural history, and engineering to business.

The SLab has three full-time staff with library and history degrees, and paid part-time graduate and undergraduate student employees from language, engineering, and chemistry backgrounds.

This broad academic background encourages students from all fields to use our spaces. One anecdotal account comes from a bio-med student who felt more comfortable prototyping in our space because she didn’t feel an inferiority complex. She probably thought, they’re just historians, what do they know? 🙂

Encouraging collaboration enriches both staff and users, and both spaces encourage staff to work on personal research and collaborate with others.

The DSC partnered with the Ginsburg Library to offer free 3D printing for research, educational or clinical purposes. The 3D print of a pelvis from a CT scan is such a result. They also partnered with the Center for Advancement of Teaching to provide grants to faculty ranging from $500-$3500.

The SLab provides short term fellowships to humanities grad students for prototyping ideas. We have provided support for students to use 3D prints for presentations, and are helping a cardiovascular medical researcher print exercise equipment for mice. More examples are in the paper.

Collaboration with all departments expands the usefulness of the space beyond the physical location and engages the entire university, even humanities scholars.

Finally, outreach plays a major role in attracting makers of all kinds, and especially in reaching interested humanities scholars.

The DSC provides workshops and training for all their equipment.

They also encourage staff and users to blog about successes and failures, and to publish results in journals.

The SLab holds workshops, has a prominent display case, and is a major stop on all the mandatory freshman library tours.

We also encourage users and staff to post their making on our blog. Publishing about the making, both successes and failures, encourages others to try, especially humanities researchers who may be afraid to fail.

I would like to conclude with four takeaways that can help libraries make their makerspaces more approachable to humanities researchers.

1st, have a passionate staff person in the makerspace. Skill level is less important; you can hire out or encourage student volunteers to bring in skills. But without excited library staff support, the space will flounder.

2nd, make the space physically accessible. Also think about how you can address mental and social barriers.

3rd, provide incentives to use the technology and space. Team up with Teaching and Learning centers. Provide free supplies and/or money.

Finally, use your library liaisons. They know your faculty and students, and they can proselytize the space. Bring them in for training on the equipment. Work with them on projects so they know what the space can provide and the tools can do.

IEEE VIS 2017: A SciVis Perspective

Since my (Robert’s) conference reports are almost entirely focused on InfoVis (and a bit of VAST), I have asked Noeska Smit, medical visualization professor and my collaborator on the Vis Potpourri postings, to write about VIS from the SciVis perspective. Everything below is in Noeska’s words.

It’s been a while since I wrote a conference report. I used to write them regularly for medvis.org in the past. However, given the low number of medical visualization papers at some of the conferences I attended, some would have ended up as very short reports indeed ;). From now on, I’ll write conference reports even if they don’t contain 100% medvis content, starting with this IEEE VIS 2017 report.

VIS 2017 took place in Phoenix, Arizona, and for me personally this was a bit problematic. The transition from rainy Bergen, Norway, to this hot arid desert climate gave me a lovely case of heat exhaustion, which unfortunately led to me missing some of the paper sessions. Less complaining, more reporting! Besides papers, I also wrote about tutorials, awards, parties, and meetups, but since Robert already covered that here, I will be sharing this on my personal blog.


I mainly attended the SciVis paper sessions (though some of the sessions did not feature a SciVis track at all!), when I was not too busy being sick ;), and will briefly write about some personal highlights of these sessions.

Activity-Centered Domain Characterization for Problem-Driven Scientific Visualization by G. Elisabeta Marai addresses the interesting problem of domain characterization for (scientific) visualization. Liz talked about shifting from human-centered design to activity-centered design, and presented a model for this which adds functional specifications to the famous Nested Model proposed by Tamara Munzner.

The Activity-Centered Design Model

The Good, the Bad, and the Ugly: A Theoretical Framework for the Assessment of Continuous Colormaps by Roxana Bujack, Terece L. Turton, Francesca Samsel, Colin Ware, David H. Rogers, and James Ahrens deals with how to assess continuous colormaps. People often speak of colormaps being “perceptually linear,” but exact definitions of the concept vary depending on whom you ask. This paper presents an overview of the literature on the topic so far, along with guidelines for colormap design. As the icing on the cake, there is also an accompanying online tool, colormeasures.org, which lets you check whether your colormaps are good, bad, or ugly ;).
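The notion of perceptual linearity can be illustrated with a back-of-the-envelope check: convert neighboring colormap samples to CIELAB and measure the CIE76 ΔE distance between them, which should be roughly constant along a uniform map. This is a simplified sketch of the general idea only, not the paper’s assessment framework or the colormeasures.org tool (which use more refined perceptual metrics).

```python
import math

def srgb_to_lab(r, g, b):
    """Convert sRGB components in [0, 1] to CIELAB (D65 white point)."""
    def lin(c):  # undo the sRGB gamma encoding
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = lin(r), lin(g), lin(b)
    # linear RGB -> XYZ (sRGB primaries, D65)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    def f(t):  # the CIELAB nonlinearity
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e(c1, c2):
    """CIE76 color difference between two sRGB colors."""
    return math.dist(srgb_to_lab(*c1), srgb_to_lab(*c2))

# A grayscale ramp as a made-up example "colormap" to check:
ramp = [(i / 10, i / 10, i / 10) for i in range(11)]
steps = [delta_e(ramp[i], ramp[i + 1]) for i in range(10)]
print([round(s, 1) for s in steps])  # roughly comparable step sizes
```

A strongly non-uniform map would show wildly varying step sizes here, which is the kind of defect a principled assessment framework is designed to expose.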

Interactive Dynamic Volume Illumination with Refraction and Caustics by Jens G. Magnus and Stefan Bruckner presents a volume rendering solution that allows for real-time refraction and caustics. This is the stuff that makes glass look like glass, and it was so far not possible to do dynamically, with on-the-fly parameter adjustment, for volumetric datasets. Jens presented the results of his master’s thesis on this topic (this trend of master’s students making awesome VIS papers needs to stop; they are making me look bad ;)). Since I love the sound of my own voice, here is a short video that shows the technique in action:

A Virtual Reality Visualization Tool for Neuron Tracing by Will Usher, Pavol Klacansky, Frederick Federer, Peer-Timo Bremer, Aaron Knoll, Alessandra Angelucci, and Valerio Pascucci proposes a VR tool for neuron tracing in volumetric datasets. They argue that segmentation of neurons from volumetric datasets is currently often performed in 2D slices or on a 2D screen, while this task is inherently 3D. They compared their 3D VR approach with the tool currently used by the domain experts, and got good feedback. Their website has a video and more.

TopoAngler: Interactive Topology-based Extraction of Fishes by Alexander Bock, Harish Doraiswamy, Adam Summers, and Cláudio Silva deals with the problem of extracting fish from volumetric micro-CT scans. Researchers scan multiple fish squished together to save scanning costs, but it is not straightforward to segment out individual fish afterwards. Alex Bock, after a brilliant Fast Forward full of fish puns, presented the tool they developed for this purpose, which is also available on Github.

The Topology ToolKit by Julien Tierny, Guillaume Favelier, Joshua A. Levine, Charles Gueunet, and Michael Michaux presents, as the title may suggest, a powerful toolkit for topological data analysis in scientific visualization. The toolkit is available on Github, and even comes with VTK bindings (for those of you that are not ‘allergic to VTK’ as Julien mentioned ;)).

In the non-SciVis-track department, there were two talks I especially enjoyed. Functional Decomposition for Bundled Simplification of Trail Sets by Christophe Hurter, Stéphane Puechmorel, Florence Nicol, and Alexandru Telea deals with graph edge bundling based on functional decomposition, with some very nice DTI brain examples (secretly a SciVis paper?).

DeepEyes: Progressive Visual Analytics for Designing Deep Neural Networks by Nicola Pezzotti, Thomas Höllt, Jan van Gemert, Boudewijn P.F. Lelieveldt, Elmar Eisemann, and Anna Vilanova deals with involving user interaction and visual feedback in the design of Deep Neural Networks.

I notice from my summary that there are not all that many ‘traditional SciVis medical visualization papers’ to write about. However, I did observe that closely related topics such as visual analytics in healthcare are very popular. I would love to see this field of research consider more imaging data in its approaches. The combination and integration of spatial and non-spatial medical data visualization seems to be a promising area of research. In conclusion, despite the heat, I really had a great VIS, and I am definitely looking forward to VIS 2018, which will be held in Berlin, Germany.

See also: