Who Makes Those Great NY Times Maps, Anyway?

If you follow this blog much, you know that I have a fondness for the New York Times, especially when maps are included in their coverage. I've mentioned it here and here and here and here. Now I know who gets the great job of making all those terrific maps: Matthew Bloch. Here's Bloch's web site, maps.grammata.com, with loads of the fun, interactive maps he's made as a graphics editor for the Times.

This map/article gives you a heads-up on where NOT to park in NYC.


Here is an interactive map of Beijing, with photos, made prior to the 2008 Olympics.


And finally, a point I've made before is that the Times rarely gives credit to their mapping expert or to the software the maps were made with... GIS! Take a look at Bloch's 'bloopers' page, his 'mapping accidents,' as he calls them, and you can spy a couple of references to ArcGIS software.

This graphic has the description "1917 map of Beijing (after trying to use spline-based georeferencing in ArcGIS)".

And this one: "U.S. states (Shapefile, opened in ArcGIS)". Ewww!


Thank you, geoparadigm, for tweeting this link.

Why Creating A New Word For Reading On Screen Is A Terrible Idea…



Multi-column layout: better than a poke in the eye with a sharp pixel...

Dan Bloom is a journalist who currently lives in Taiwan. Over the past few days, he's generated a flurry of activity on this blog, in my inbox, and on Facebook with a suggestion that we need to create a new term to describe the activity of reading onscreen. He suggests the term "screening". (See the comments on my previous post: Paper Dies - But Reading Lives: The Richness of Future Web Reading.)

Dan was also very enthusiastic about the multi-column layouts I've been experimenting with on my website, and wants to know whether there are free templates anywhere he can use, so that, for example, he could read his email in multiple columns.

He asked for my opinion on the term "screening". So here it is:

Creating a new term for reading onscreen is not only unnecessary, but actually counter-productive.

However, Dan's heart is clearly in the right place, so rather than just respond with another in a string of comments, I decided to escalate the topic and make it the subject of this post. (It's my party, and I'll blog if I want to...)


First, the term "screening". IMO, that's like admitting defeat - that somehow "reading on screen" is different to "reading on paper". It's not. Yes, there are differences today. Reading on screen is not as comfortable as reading from paper. But it can - and should - be. Once it is, then all the advantages of digital information really start to pay off.

Imagine a conversation between two people, fifty years from now...

"How did they communicate information back in the old days?"

"Well, they'd plant trees. After 30 or 40 years of growth, they'd cut them down and transport them in hydrocarbon-burning vehicles to a place called a pulp mill. There, they'd mash them up with a load of chemicals (when they were done with the chemicals, they'd dump them in the nearest river).

"Then they'd roll and press the pulp into long sheets of "paper". They'd transport those (again, in hydrocarbon-burning vehicles) to a printing works, where they'd use huge machines to put dirty marks on the "paper", fold it, cut it up, and transport it (more trucks) to the readers, or "bookshops" where people would go to buy the information they wanted or needed."

Does anyone really believe we'll still be doing that 50 years from now? For any kind of information?

In the early days of automobiles, they were noisy, smelly and unreliable. In some parts of the world, you weren't allowed to drive one on the road without a man carrying a red flag walking in front of you as a warning to other road users.

People said the automobile would never replace the horse as the primary means of transport...

As far as reading onscreen is concerned, it's still the early days. It took about 400 years from Gutenberg to the Linotype machine. We've been doing onscreen reading for about 25 years - and it's only been even halfway bearable for about 10.

We don't need the man with the red flag any more, but the automobile is still noisy, unreliable - and stinks.

There's no reason it should be that way. All the technology we need to make reading great on a screen already exists, and could be implemented within a year or two. But the technology companies that make Web browsers, and the people who create Web content, have decided that fighting battles over market share based on "feature checklists" is more important than stepping up and implementing a comprehensive plan to make real improvements for everyone who reads on the Web.

Technology companies don't "get" the importance of fixing reading on screen. Journalists do. That's why I'm really happy to see someone like Dan stirring up the waters here.

Journalists should be giving technology and media companies a hard time, along the following lines...

  • Reading and writing are still the primary means of human communication (because text is easiest to create).
  • Reading and writing are moving from "making and viewing dirty marks on shredded trees" to "making and viewing digital information".
  • Reading onscreen is still inferior to reading from paper.
  • What's your plan to make reading onscreen just as good?
  • What's your schedule for implementing that plan?
I'd like to see the answers they give.

Now, on the subject of templates for multicolumn layout. The short answer is: I don't have any, although you're welcome to use any of the HTML and CSS markup from my website.

But at the risk of repeating myself yet again:

  • Multicolumn layout is much better suited to the screen than single-column (because of the way human vision works); see the sketch after this list.
  • However, it can't work without pagination (who wants to scroll down to the bottom of one column, then have to scroll a long way up to the top of the next?).
  • There are many different sizes and shapes of screen, so information has to be paginated "on the fly" for each device.
  • This requires adaptive layout. It's not rocket science - you can see it at work today in applications like the New York Times Reader. But no-one's doing it on the Web yet, although it's easily possible.
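
Here's a rough sketch of the column half of that, using the CSS multi-column properties. It's an illustration, not a template from my site: the class name and measurements are invented, and current browsers generally still want the vendor-prefixed property names.

```html
<!-- A minimal sketch: adaptive multi-column reading in plain CSS.
     Class name and measurements are illustrative only. -->
<style>
  .reading-pane {
    /* let the browser fit as many ~22em-wide columns as the window allows */
    -moz-column-width: 22em;
    -webkit-column-width: 22em;
    column-width: 22em;
    -moz-column-gap: 2em;
    -webkit-column-gap: 2em;
    column-gap: 2em;
  }
  /* narrow screens fall back to ordinary single-column flow */
  @media (max-width: 30em) {
    .reading-pane {
      -moz-column-width: auto;
      -webkit-column-width: auto;
      column-width: auto;
    }
  }
</style>
<div class="reading-pane">
  <p>Article text flows into as many columns as fit the window...</p>
</div>
```

Note what this doesn't do: the columns still grow downward without limit, so you'd scroll to the bottom of one column and back up to the top of the next. Real pagination means capping the column height at the viewport and flowing the overflow into discrete pages, and that is exactly the adaptive-layout step no Web browser gives you today.
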
Fixing reading on screen is vitally important for the human race. You could instantly put the contents of the Library of Congress in a village in West Africa. Digital information can be easily translated into minority languages. Books will cost less. Information can be kept up to date. And so on, and so on.

I happen to believe that the first Web browser to do this properly will leave all the others sitting in the dust, wondering just where their market share disappeared to.

I see plenty of "feature lists" from the browsers. What I don't see is strategic, long-term vision.

Welcome to the TILE project blog!

Here you’ll find the latest TILE news, as well as information about our project team, partner projects, and prototype and related tools. Be sure to visit regularly for project updates, or subscribe to the RSS Feed to have news sent directly to you.

What exactly is TILE? TILE stands for Text-Image Linking Environment, and it’s a web-based tool (or more properly a collection of tools) that will enable scholars to annotate images, and to incorporate them into their digital editions. TILE will be based primarily on the Ajax XML Encoder (AXE) developed by project co-PI Douglas Reside and funded through an NEH Digital Humanities Start-up grant. During the course of this project we will extend the functionality of AXE to allow the following:

  • Semi-automated creation of links between transcriptions and images of the materials from which the transcriptions were made. Using a form of optical character recognition, our software will recognize words in a page image and link them to a pre-existing textual transcription. These links can then be checked, and if need be adjusted, by a human. (A hypothetical sketch of such a link appears after this list.)
  • Annotation, using a controlled vocabulary, of any area of an image selected by the user (for example, the tool can be configured to allow only the annotations “damaged” or “illegible”).
  • Application of editorial annotations to any area of an image.
  • Support for linking non-horizontal, non-rectangular areas of source images.
  • Creation of links between different, non-contiguous areas of primary source images. For example:
    • captions and illustrations;
    • illustrations and textual descriptions;
    • analogous texts across different manuscripts.
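
To make the first item above concrete: this announcement doesn't fix TILE's storage format, but the facsimile module of TEI P5 gives a fair picture of what a word-to-image link can look like. The file name and coordinates below are invented purely for illustration.

```xml
<!-- Hypothetical illustration only; TILE's actual format is not
     specified here. A TEI P5-style link between a transcribed
     word and a region of a page image. -->
<facsimile>
  <surface>
    <graphic url="folio-12r.jpg"/>
    <!-- pixel bounding box of one word, e.g. as proposed by the OCR pass -->
    <zone xml:id="zone-w1" ulx="118" uly="304" lrx="262" lry="341"/>
  </surface>
</facsimile>
<text>
  <body>
    <!-- facs ties the transcribed word to its image region;
         a human editor can later correct the coordinates -->
    <p><w facs="#zone-w1">incipit</w></p>
  </body>
</text>
```

TEI's zone element also accepts a points attribute for polygonal regions, which is the kind of thing the non-rectangular linking described above would need.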

We are especially concerned with making our tool available for integration into many different types of project environments, and we will therefore work to make the system requirements for TILE as minimal and as generic as possible.

The TILE development project is collaborative, involving scholars from across the United States and Europe who are working with a wide variety of materials – ancient and modern, hand-written and printed, illustrated, illuminated, and not. This project has the potential to help change not just digital editing, but the way software in the humanities is developed and considered. Many tools created for humanists are built within the context of a single project, focusing either on a single set of materials or on materials from a single time period, and this limits their ability to be adapted for use by other projects. By design, our project cuts across subjects and materials. Because it will be simple, with focused functionality, our tool will be usable by a wide variety of scholars from different areas and working with a variety of materials – illustrations and photographs as well as images of text. Therefore we have brought together several collaborators from different projects with different needs to provide advice and testing for our work: The Swinburne Project and the Chymistry of Isaac Newton at Indiana University-Bloomington, the Homer Multitext Project at Harvard’s Center for Hellenic Studies, the Mapas Project at the University of Oregon, and various projects supported through the Digital Humanities Observatory at the Royal Irish Academy, Dublin. As TILE becomes available, we will be seeking additional projects and individuals to test its usability. Watch the TILE blog for announcements!

TILE is a two-year project, scheduled to run from May 2009 through May 2011. Funding for TILE is provided by the National Endowment for the Humanities, through the Preservation and Access (Research and Development) program.

If you have any questions, please leave a comment below or write to us at TILEPROJECT [at] listserv [dot] heanet [dot] ie. Thanks for visiting!

“Summer Camp For Archivists” Sounds So Much Better

Crossposted to thesecretmirror.com.

I’m staying with colleagues and good friends during my week-long stint in Charlottesville, Virginia for Rare Book School. If you’re here – particularly if you’re in my class (Daniel Pitti’s Designing Archival Description Systems) – let me know. I’m looking forward to a heady week dealing with descriptive standards, knowledge representation, and as always, doing my best to sell the archives world on Linked Data. Notes and thoughts will follow, as always, on thesecretmirror.com.

The New Liberal Arts

I came across this new "course catalogue" for the New Liberal Arts. I regularly think that the liberal arts needs an overhaul, a new way of thinking and teaching, and certainly a critical analysis of the curriculum. As the writers describe it, this manual "began as a blog. That’s the twenty-first-century way of saying it began as a conversation. ... This is the idea, roughly: to collectively identify and explore twenty-first-century ways of doing the liberal arts." I'm for that!

In this booklet, they've compiled some twenty course descriptions for the "new" liberal arts. Here is the one I want to highlight: a course simply called Mapping, by Jimmy Stamp.
"Which better explains the landscape: maps or photographs? There’s no longer any reason to choose. The potential now exists to create visceral, photo-integrated maps that are able to successfully communicate the urban conditions such as "fractalization." Applications such as Google Maps increasingly change the way we see, understand, and describe our environment. Cameras with geo-tagging capabilities afford us the opportunity to embed photographs into digital maps, resulting in something that’s more than a record of place; it is a record of time. Moments are mapped and universally accessible; a shared global consciousness arises via shared cartography. The personal becomes public while public space becomes personalized."
Find the whole New Liberal Arts booklet here. Read it. Share it. They want us to.