Centeno began the talk by describing the origins of his interest in globalization some 11 years ago, around the time of Thomas L. Friedman's first publications on the relationships between nations (The Lexus and the Olive Tree, 1999, and The World Is Flat, 2005). Centeno said it occurred to him that there were many ways to frame the subject of globalization, and that the process, in fact, had been going on for thousands of years. What, he wondered, was the best approach to grasp the complexity of the concept without resorting to banalities--and what was the best way to diagram information as complex as that describing global trade?
Centeno's first attempt to answer that question was to develop the International Networks Archive (INA), where he used graphic arts, among other things, to try to depict complex relationships in easy-to-understand ways. Drawing on trade data from standard United Nations reports, he generated diagrams that revealed some stunning conclusions about global transactions. Centeno calls these images "infographics." Two examples, The Magic Bean Shop and The Fries that Bind Us, are diagrams in the INA collection showing the effects of McDonald's and Starbucks franchises on global trade. The latter diagram, he noted, has been the most popular on the site, having been reprinted multiple times as an example of the sort of trends the INA is best at describing.
The fries that bind us? A diagram showing the effects of Starbucks coffee shops and McDonald's restaurants on world trade. Image copyright 2003, INA.
"Globalization is nothing more than a complex series of transactions across the planet," said Centeno, alluding to the strong connections that can be made by analyzing trade data. "Most of these data sets are available publicly," he noted, showing a table that tracks the annual number of minutes spent in phone communications between countries. Data about the imports of movies, books, as well as trade data, are among the many other ways to show how these transactions take place through what seems like simple exchanges.
The INA project was followed by "Mapping Globalization," a site where data is visualized in three distinct ways.
The first section of the Mapping Globalization site contains a collection of maps, and links to maps of various kinds: these include historic maps, interactive maps, and modern satellite imagery that help to convey the notion of geographic location as a critical, but often overlooked aspect of globalization. "Globalization involves connections between at least two places," the website explains, "and the first step in our understanding must be an appreciation of what this means in a concrete sense of place."
The second, and least developed, section of the "Mapping Globalization" site is the "Narratives" section, a series of animated movies that show general trends in globalization over time, such as "Migrations" and "Empires."
Finally, the "Data and Analysis" section uses diagrams generated by technology from NetMap Analytics, which creates diagrams showing the density of trade between nations. Using data from GKG trade statistics, NetMaps are circular diagrams that show relationships between various countries, grouped by continent. Thresholds can be set on the data depicted to clarify the diagrams. For instance, setting a threshold of f "0.3%" means that links corresponding to a trade share less than 0.3% of the total dollar value in the category are not shown in the diagram.
Despite best efforts at the time, there was no way for the NetMaps to be generated dynamically on the website; however, images of several of the most interesting patterns can be found in the section of the site called "NetMap Combined Studies."
The talk next focused on a project undertaken by Manish Nag, a graduate student in the Department of Sociology at Princeton who now studies with Centeno. Nag described his past career as an IT consultant and his first foray into studying globalization at Harvard under Jason Beckfield. At Harvard, Nag worked on a project called Sonoma, a way to visualize statistical data using maps. When he came to Princeton to continue his studies, he began to work with Centeno on an interactive database that would allow anyone to diagram world trade relationships. The result was the MapTrade project.
MapTrade, still in beta, shows various projections of a world map (the interface supports Robinson, Winkel Tripel, Gall-Peters, and equirectangular views). Trade flows can be diagrammed on top of these projections, showing trade between selected nations, trade in specific commodities, or all trade between all nations. Trade data is available for 1980, 1990, 2000 and 2009.
Using the interface, it is possible to save generated maps, so that comparisons can be drawn, and the results saved for use in research and presentation. As with the earlier NetMaps projects, filters can be applied to clarify the data by setting thresholds, or by limiting the transactions by their total percentage of world trade.
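A "top share" filter of this sort (for example, the top 75% of world wheat trade shown in the maps that follow) can be approximated with a simple greedy calculation, sketched here in hypothetical Python that is not MapTrade's actual code: sort the flows by dollar value and keep the largest ones until they account for the requested fraction of total trade.

```python
# Hypothetical sketch of a "top share of trade" filter.
def top_share(flows, share=0.75):
    """Return the largest flows that together cover `share` of total trade.
    Each flow is a tuple: (exporter, importer, dollar_value)."""
    total = sum(v for _, _, v in flows)
    kept, running = [], 0.0
    for flow in sorted(flows, key=lambda f: f[2], reverse=True):
        if running >= share * total:
            break
        kept.append(flow)
        running += flow[2]
    return kept

flows = [("A", "B", 60.0), ("C", "D", 25.0), ("E", "F", 10.0), ("G", "H", 5.0)]
kept = top_share(flows, share=0.75)  # the two largest flows cover 85% >= 75%
```

With made-up flows like these, the two biggest links already exceed the 75% target, so the many small flows drop out of the picture; this is why such maps tend to highlight a handful of dominant trading partners.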
Centeno and Nag used the MapTrade interface to generate a series of maps, showing the shift in trade centers over time.
A diagram showing the top 75% of trade in wheat among all nations, 1980. Image generated by MapTrade.
A diagram showing the top 75% of trade in wheat among all nations, 2009. Image generated by MapTrade.
The audience then requested several maps showing various commodities, countries and time periods.
Who knew so many fish sticks were traded between the U.S. and China in 2009? Or that the top 50% of world trade involves only 10 countries? You may have suspected these things; MapTrade can draw you the picture to prove it!
A future phase of Centeno and Nag's collaboration will include making the NetMaps data interactive, much in the way that MapTrade currently is, so that users can generate and save their own diagrams.
Links to all three of the projects discussed in today's talk can be found at:
Will Howarth, Professor Emeritus of English at Princeton, spoke to a large Lunch 'n Learn audience on February 16 about how he uses his iPad as an essential companion to reading, writing, research and travel.
Howarth began the talk by describing his long search for a lightweight, portable device that would be convenient for use while writing and traveling. From small-format computers of various vintages, to PDAs, Howarth has found the iPad to be the best solution to date. Its light weight (24 ounces), long battery life (approximately 10 hours), responsiveness, and the availability of useful applications have made it one of his favorite tools for productivity.
Howarth showed the basic mechanics of navigating several iPad screens, and using the screens to organize applications by function. He also demonstrated how to customize the persistent tool "dock" that appears on all screens, useful for storing one's most commonly used applications.
Howarth's preferred layout is to have news and information applications on the first screen of his device, writing tools on the second, and on the third screen, a miscellaneous assortment of apps that are either not fully tested, or exiled as being of secondary importance.
Citing the limitations of the iPad's virtual keyboard for someone with larger hands, Howarth showed his solutions in the form of two Bluetooth keyboards that can be synced to the device to allow typing on a more conventional set of keys. One of the keyboards was integrated into a small carrying case; the other, more suited to desktop use, was a compact stand-alone keyboard with full-sized keys. Another limitation of the iPad is the lack of a USB or other data port that would allow easy file transfer via portable storage media. However, since several of the applications Howarth customarily uses have mechanisms to sync and share files among several machines, this shortcoming has been largely overcome by application developers. Howarth proceeded to describe and share his favorite iPad applications for writing and research.
Author's note: Although the talk was cut short owing to time constraints, Professor Howarth was kind enough to share his notes with me. This post contains material that may not have been presented in the talk, or was mentioned only briefly last Wednesday.
Safari (included with the iPad) is the browser built into all devices running Apple's iOS. On mobile devices, Safari can be customized for fast browsing, for bookmarking popular destinations, and for taking advantage of the iPad's portability. Howarth demonstrated how he has tailored his Safari toolbar so that research tools, particularly remote access to scholarly research collections including Princeton's Library, are available at his fingertips. Among the headings in Howarth's customized list of bookmarks are Reference tools, Authors, and Libraries.
Wikipanion (free in the app store) is a tool designed to optimize searching, navigation, and display of entries in Wikipedia. The tool's graphical display of a Wikipedia entry includes a sidebar outline of main headings in a Wikipedia entry to facilitate navigation and exploration, as well as contextual links to related topics.
Google Earth (free in the app store) is a portable version of the popular desktop application, made even more stunning by the iPad's high resolution screen. The application includes all of the features and imagery of the desktop version, with the added ability to find your own location on the globe using the built-in GPS features of the iPad. A good companion to travel, Google Earth, like Google Maps (included with the iPad) can help to find local landmarks, businesses and cultural locations.
The National Geographic World Atlas ($1.99 in the app store) is another application for maps, this time featuring high-resolution images of National Geographic's own distinctive cartography. The app features 3 different styles of maps, and can be zoomed down to the granularity of a satellite image focusing on a particular street or building. (Street-level maps are drawn from Bing satellite imagery.)
The Safari browser should be the first point of departure as a source for reference materials, as the bookmarks can be customized to point to many excellent online tools. Howarth recommends not buying too many reference apps until the potential of Safari is exhausted.
Things for iPad ($19.99 in the app store), also available in a desktop version for Macs, is a task manager in the category known as "todo" apps. The app allows you to enter notes, projects, and due dates in an easy-to-use interface that syncs with the desktop version of the application. Since Howarth uses both versions, he finds it easy to set up lists at home and have them automatically updated on the iPad. He uses categories to set priorities and schedule tasks, and relies on the built-in lists for "Today," "Next," "Scheduled," and "Someday" to keep him on track with deadlines.
DEVONthink To Go ($14.99 in the app store) is a companion program to DEVONthink and DEVONnote, both desktop applications for the Mac. The program can be used on its own, but according to the manufacturer it "unfolds its full potential ONLY when used in conjunction with these applications." Howarth uses DEVONthink Pro Office and DEVONnote, using the applications together to save web clips, bookmarks, files for courses, notes on alumni trips he has led, and writing projects. A sync folder in the applications keeps the iPad version updated; conversely, any changes on the iPad are reflected in the desktop versions at the next synchronization.
Bento for iPad ($4.99 in the app store) is a personal database program made by FileMaker. It comes in a desktop version as well, and can sync with Bento 3 for the Mac. The database includes templates for many sorts of organizational tasks, from to-do lists, to events, to household inventories, to expenses--even logs for diet and exercise. Howarth uses Bento at home on his computer, mostly for listing addresses, book inventories, and lists of films. The application, Howarth notes, can export and import spreadsheets in various formats.
These apps, Howarth noted, are best suited to those who are enthusiastic users of their desktop counterparts. For those who don't own, or intend to own the companion programs, similar functionality can be found in the Note-Taking applications, described below.
One major gap in this category of applications is one for organizing bibliographic references. Howarth told the audience he has been in contact with the makers of EndNote, a bibliography program popular among scholars at Princeton; they report that an iPad version of their database is currently in the works.
Writing begins with reading, according to Howarth--here are his favorite tools:
iBooks (free in the app store) is Apple's own e-book reader, with content purchased from iTunes. iBooks also has the ability to read PDF documents, which can be included in the library from email attachments sent to the iPad. Items in one's library can be viewed as book covers on a virtual bookshelf, or in list view, and it is possible to arrange collections within one's library. Howarth showed an 8-page PDF report written by one of his students that is now part of his iBooks library. The interface controls include adjustments for screen brightness, a search feature, and bookmarks. The interface also has an animated page turn feature, and a "scrubbing" progress bar to slide rapidly from one section of the book to another. Books can be annotated, but PDFs cannot. Although iTunes sells many popular current books, it also has many free offerings, mostly for books in the public domain.
Kindle (free in the app store), an app that shares the name of Amazon's popular e-reader, allows Kindle books to be read on the iPad and the iPhone. There are numerous versions of the Kindle reader, available for most portable devices and desktops, as well as a web-based version. Content for the app is purchased from Amazon.com, or uploaded by the user. The reader accepts .azw, .mobi, .rtf and text files, as well as PDFs. Howarth showed how to navigate his Kindle edition of Deep Creek, a novel he co-authored with Anne Matthews under the pseudonym Dana Hand. The Kindle interface turns pages with a swipe or a tap, and tapping on a word offers options to highlight it, make a note about the text, or display the word's entry in a built-in dictionary, with links to related entries on Wikipedia and Google. Notes, bookmarks, and highlights are stored on Amazon cloud servers, and can be referenced and printed through the online interface. The Amazon Kindle bookstore has the most titles of any digital bookstore, including more than 25,000 free titles from Project Gutenberg.
Stanza by Lexcycle (free in the app store) is one of the first e-readers ever made, and has been recently acquired by Amazon. Less sophisticated than the other two readers mentioned in this section, it offers annotations, bookmarks, search, and reverse black/white screen view. Stanza is backed by a library of more than 100,000 books, all of them free.
Working across e-readers can be problematic because formats, citations, annotations, and page numbering are not standardized, which, as Howarth notes, is a major headache for scholars. One bright note on this topic is the recent announcement that Amazon will include references to the pagination of the print edition on which a Kindle edition is based, allowing more accurate citations and place finding for readers who use both paper and digital editions of books. Apple's threatened restrictions on books purchased from non-Apple apps have also caused some worry among consumers.
Among the three readers discussed here, Howarth declares Kindle the winner, because it is the most affordable and flexible platform for reading.
These applications are ideal for taking notes, sharing them, and syncing them with other machines. In some cases, they can provide an alternative to the database applications listed above. There are hundreds of such apps available for the iPad; here is Howarth's selected list. Some of these applications have a browser interface that will update information on your mobile device.
Index Card ($4.99 in the app store) is a simple non-linear writing tool for the iPad. It allows notes to be captured in an interface that resembles index cards pinned to a corkboard. Notes can be reordered, recolored, written, edited, and "stacked" into projects. Index Card exports a text file of your notes that can be read by most word processors. Howarth finds this a favorite tool for brainstorming, organizing, categorizing by color, and for organizing projects. He shares his cards via email, or using Dropbox.
PlainText (free in the app store) is a simple app for editing text on the iPad. It looks much simpler than Index Card, and does many of the same things. Sharing and syncing is done via a Dropbox interface. Howarth and other writers like it because it is simple, elegant, and has a very "paper-like" interface.
SimpleNote (free in the app store) is a note-taking app that, despite its name, is a little more complex than the other apps mentioned in this section. Howarth uses SimpleNote in conjunction with a Mac application called Notational Velocity (a free, open-source download) that stores and retrieves notes. Howarth finds it a great way to type up quick or related ideas, which auto-sync to SimpleNote. There is also a browser application for SimpleNote that can be used to share ideas with others. There is no choice of font, and the user interface is less attractive than the other two options.
All three of these note-taking applications have unique strengths, but of the three, SimpleNote is the most versatile.
Notebook apps group items, sync them to cloud servers, allow for exports into various word processors, and allow entry of data either via a web browser or a desktop application.
Springpad (free in the app store) is an application that allows you to save notes, tasks, links, images, nearby places, barcode scans (from products, books or media), and lists of things (movies, books, wines) in virtual notebooks that organize your materials by topic. It syncs via Springpadit.com to a browser interface that includes a web-clipping tool. Your notebooks can be shared with family and friends using Facebook or Twitter. Howarth likes the application for its organization and synchronization, and notes that it is a very good tool for working with groups. His notebooks, containing items related to Teaching, Writing, Travel, and Local topics, were displayed against a background of a favorite picture.
Evernote (free in the app store) is probably the most popular notebook app for Apple devices. It stores many kinds of files, including webpages, PDFs, text, links, audio files and images, and organizes them into notebooks based on project type. Each media type can also be geo-referenced for mapping and searching. Evernote syncs to Mac, PC, and web interfaces, and the desktop versions are also free downloads. The "todo" functions of Evernote are quite good, and work best when used in conjunction with one of the desktop versions. Monthly uploads of up to 60MB are free on Evernote; the premium version ($45/year) allows monthly uploads of up to 1 GB, and also allows read/write notebook sharing with colleagues, whereas sharing in the free version is read-only.
Howarth notes that other notebook applications allow writing and drawing and speaking instead of typing, but his recommendation is Evernote as the best notebook app.
PDF documents are part of the lingua franca of scholarly communication. There are several apps that allow PDFs to be read, annotated and shared on the iPad. PDFs can be loaded onto the iPad via a server, a download, file sharing through iTunes, or as an e-mail attachment.
iAnnotate ($9.99 in the app store), as the name suggests, is a tool made for annotating PDF documents (simple PDF readers are more numerous). The tool allows highlights, notes, freehand drawing or writing, bookmarks, stamps, underscoring, strike-through, and tabbed reading of multiple documents. The standard toolbars can be customized with a wide range of commands, and the program supports display through VGA out. Search is possible at the document level, or across the full library. Markups can be "flattened" for printing and sharing in a way that preserves annotations as an image, or emailed "as is." Sync is possible through iTunes, Safari, email and Dropbox. The same company makes a desktop companion for iAnnotate called Aji PDF Service; using the desktop program in conjunction with iAnnotate makes it easy to manage large libraries of PDF documents.
GoodReader ($2.99 in the app store) is another PDF reader/annotation tool. It allows sticky notes, highlighting, freehand drawing and writing, rubber stamps, underlining, strike-through, and shapes such as arrows, boxes, ovals, and others that can be used to draw attention to sections of a document. Transfer and sync can be done via MobileMe, iDisk, Google Docs, Dropbox, SugarSync, box.net, and WebDAV and FTP services. The application is most versatile in the document types it can read: not only PDF, but MS Office, iWork, HTML, image and audio and video files can be used with this application.
Papers for iPad ($14.99 in the app store) is mainly for scholars of science. Although the app is a PDF markup tool, allowing highlighting and notes, and emailing annotations, the chief benefit of the app is the built-in search engine that allows you to find and download PDF articles in the following databases: CM, NASA-ADS, arXiv, Google Scholar, IEEE Xplore, JSTOR, Pubmed, and Web of Science. There is a desktop version for the Mac that can be used for synchronization, but it also works with Dropbox, iDisk, iTunes and email. PDFs are stored on your iPad, so you need at least 100MB of free space. A limitation in the current version is that although documents are synced between the mobile and desktop versions of the app, your annotations are not.
GoodReader is a good value for most PDF use, and also works with other document types. iAnnotate has more markup features, and the advantage of VGA-out. Papers is invaluable for a researcher who commonly uses the scholarly databases supported by the application.
Storage on Cloud Servers
Getting documents on and off the iPad, keeping them up to date, and sharing them with people, other applications, and devices relies mostly on wireless forms of document transfer. Cloud servers perform an important function in achieving this goal.
From the numerous times that Dropbox (free in the app store) is mentioned in other entries, you may have concluded that it is a very popular program for file sharing. Dropbox is available for desktop and mobile devices, has a built-in public folder for sharing, and a photo folder for making automated slide shows you can send to other people. Changes made through any Dropbox interface sync to all the others. The free service offers up to 2 GB of storage; the next upgrade takes you to 50 GB for $99/year.
MobileMe iDisk (the app is free in the app store, but a MobileMe subscription is required) is a popular Apple service that allows you to view and share files from a number of devices. File types from iWork, Microsoft Office, PDFs, QuickTime movies, JPEGs and more are supported; however, files larger than 20MB may not be viewable on all devices. The iDisk has both public and private folders to facilitate sharing. Paid subscribers of MobileMe who have legacy iPhones can subscribe to a service on MobileMe that will find their lost or stolen iPhone. Owners of the iPhone 4, iPad, or fourth-generation iPod touch with iOS 4.2 or higher can get this service with a free account, but storage space still costs money.
Air Sharing ($0.99 in the app store) allows you to mount your iPhone, iPad or iPod touch as a wifi drive on your computer. It works with Mac, PC or Linux. Mounting your mobile device as a remote drive allows you to drag and drop files between devices for syncing and sharing. Documents can be viewed and emailed. The app also allows you to mount other web-based servers such as MobileMe iDisk, Dropbox, Box.net, WebDAV, FTP, FTPS, and SSH/SFTP, and allows downloads of files from the web. Air Sharing can zip and unzip files, and can print to printers shared by Mac OS X 10.5 and above or Linux. It has an advanced image viewer for hi-res images and a PDF viewer that supports large, structured PDF files. There's a long list of viewable file types that includes most office applications and media files. The HD version is made especially for the large display of the iPad; the same company also makes a fun app that allows you to turn your Apple device into an extra computer monitor.
Dropbox is the Esperanto of file sharing apps, and you should have this one. Other cloud services can provide extra features.
iWork for mobile devices started a revolutionary trend in office-type applications. Rather than selling bundled software that includes a word processor, a spreadsheet program, and a presentation program, as is typical, Apple decided to market these applications separately for the iPad. Each app costs $9.99; the unbundled desktop versions of the same three apps cost $19.99 each.
On the iPad, files can be shared using email, iWork.com, iTunes, MobileMe iDisk, or WebDAV. One-tap AirPrint is available in all three apps, allowing automatic printing on any AirPrint-enabled printer.
Howarth describes Pages as his favorite word processor, one he customarily uses on both the iPad and his Mac to share files with MS Word users. The iPad interface is described by Apple as "the most beautiful word processor ever designed for a mobile device." They may be right.
Keynote is Apple's version of PowerPoint, and in Howarth's opinion, is in many ways better. Presentations are easy to build, and sync between devices (although fonts can be an issue). Keynote is one of the few Apple apps that works with the VGA-out feature of the dock connector on the iPad, which makes it possible to use the iPad as both a display and an editing device for Keynote presentations.
Numbers is Apple's spreadsheet app, which Howarth says he uses mostly for grade sheets; its built-in formulas make calculations easy. The app has many built-in design features, so spreadsheets look less like boring tables and more like a polished publication.
These apps make the iPad a viable laptop replacement. An external keyboard is almost required to get the most out of them, but the applications cost so much less than expected, you can use the money you save to get a fancy iPad case with an integrated keyboard that makes typing a breeze.
According to Howarth, the iPad is a lot more than entertainment -- the constant evolution of apps has made it into a valuable tool for writing and research. New, useful apps emerge every day to extend the usefulness of this device.
Howarth concluded his presentation with a video that, he said, makes it clear that research is "the coolest, sexiest work on the planet."
Scrivener, an innovative software package for writers, was the topic of last week’s Lunch ‘n Learn, led jointly by Professor Will Howarth, Professor Emeritus of English at Princeton, and Jon Edwards, who has recently retired from Princeton’s Office of Information Technology. Howarth and Edwards spoke of their enthusiasm for this fairly recent tool, with Howarth demonstrating the latest version for Macintosh computers (Scrivener 2.0), and Edwards using the new beta version for Windows (Scrivener Beta 1.4).
The idea for the software, Professor Howarth explained, was conceived in 2006 by Keith Blount, a primary school teacher from England turned self-taught programmer, who was frustrated by the limitations of existing commercial word processors. Blount wanted to design a different set of writing tools to support his ambitions for writing fiction. His vision for a new type of writing tool became a reality when the first version of Scrivener for the Mac was released in January of 2007; a beta version of Scrivener for Windows followed in November 2010 to coincide with National Novel Writing Month. Blount’s software firm, which now employs 4.5 full-time staff members, is called Literature and Latte; Scrivener is its sole product. Although entire documents can be written and formatted in Scrivener, the program is really designed to help with the more creative aspects of writing, beyond just typing words and making them look good on a printed page.
Scrivener was described by Howarth as being part “content-generation tool” and part “idea-and-structure processor.” Scrivener deals with all aspects of a writing project, from first ideas, to research links and notes, to outlining, structuring, and eventually, composing and editing a document. Scrivener-created works can later be exported to a traditional word processor for final polishing and formatting. Apart from supporting common word processor formats such as .DOC, .DOCX, .RTF and HTML, text can also be exported to e-book formats such as ePub, a standard open format; .MOBI, which can be read on the Amazon Kindle; and PDF. It isn’t only this flexibility in file types that sets Scrivener apart from other writing tools. By design, the software attempts to follow the creative process that takes place before writing begins, starting with half-formed ideas and sketchy notations; the writer then proceeds with research, composing and organizing, adding to and editing these beginnings into a more complete work. Although the production version of the Mac edition of Scrivener has only been around for a few years, it has already become the top choice of many professional fiction writers, particularly in the United Kingdom.
Howarth demonstrated the software interface, showing its three-part workspace: there is a binder pane (a collection of all written parts and research material for a particular work), a central editing pane (where writing and edits occur), and an inspector pane on the far right of the screen, where metadata and other information about items in the binder can be entered and viewed. Pre-existing templates for several specific types of writing are included in the software: screenplays, novels, short stories and non-fiction, are several examples of templates that contain formatting commonly required by publishers and producers of such works, particularly those in the UK. The scriptwriting template, for example, has many of the standards required to submit such works to the BBC, as well as being a general guideline for standard script formatting.
Howarth demonstrated many ways to view an existing work in progress in Scrivener, showing both a traditional outline format, as well as one that represented the outline as if each part was an index card pinned to a corkboard. In either view, highlighting and dragging one part of the work to a new position in the outline structure, or on the pin board, caused the document to immediately reflect that change in organization.
Screen shot showing the Scrivener "corkboard" view. (Note: this image shows the interface for Scrivener for Windows Beta 1.4).
Using an e-text version of Walden by Henry David Thoreau, taken from the Project Gutenberg online repository, Howarth showed how easy it was to break an existing long work into component parts. In the case of Walden, Howarth quickly divided the book into its published chapter structure, by using search terms and keyboard shortcuts. He also demonstrated how search results of certain terms (searches that look both in the work’s text and all of the research materials in the binder) resulted in saved collections or smart folders that can be used for later reference. Expanding upon the visual strengths of organizational tools in Scrivener, Howarth even color coded each chapter of the Walden document to reflect the seasons of the year described in the narrative. This resulted in a handy way to group chapters by Spring, Summer, Winter, Fall, and back to Spring, in the same way that Thoreau organized his account of a year’s life in the woods. Using the same Project Gutenberg file as research material for a new Scrivener project, Howarth showed how he was able to adapt Thoreau’s work into a correctly formatted screenplay, using the templates already built into Scrivener as his guide.
The e-text of Walden and the other supplemental files that Scrivener can save in the course of working on a project illustrate how external documents and files can be organized for easy reference and later citation. Research materials saved in Scrivener can include web sites, images, notes and bibliographic references. Scrivener also supports EndNote field codes (also known as “Cite While You Write”), placeholders for including properly formatted bibliographic citations in a written work.
Howarth described his Scrivener workflow: using storyboarding and notation software on the iPad (the Index Card and Simple Note apps) to capture ideas, synchronizing those notes with Scrivener, working on the document in Scrivener, and later exporting to Apple’s Pages software or to Nisus Writer Pro for the Mac (an RTF text editor; Scrivener supports RTF) for final formatting. The end result is a finished file that can be shared with publishers as a Microsoft Word document. Howarth described how this process helped him to collaborate with co-author Anne Matthews on their latest work Deep Creek, published under the pseudonym Dana Hand. Howarth and Matthews were both able to seamlessly share files and resources using Scrivener in the planning and writing phases of their work, and later delivered the finished novel in the .DOC format accepted by their publishers, Houghton Mifflin Harcourt.
Coincidentally, Deep Creek, which has met with great critical acclaim, has recently been named one of The Washington Post’s Best Novels of 2010. What is next for the Dana Hand authors? Howarth showed a glimpse of a screenplay based upon Deep Creek that he was working on in Scrivener. Will this Dana Hand film be coming soon to a theatre near Princeton?
Howarth concluded his portion of the talk by reflecting on how his discovery of Scrivener, coinciding with the extra time afforded by his retirement, has allowed his writing to develop in directions he had never imagined possible in his earlier career. He informed his audience that he could not guarantee using Scrivener would make them all authors of best-selling novels—but that it would certainly help to make their writing projects easier and more enjoyable.
Jon Edwards next spoke of his experiences with the recently released version of Scrivener for Windows, software that is still in beta development. His new book on Gioachino Greco, a chess player active in the early 17th century, is due for publication in February; however, Edwards used parts of the completed manuscript to experiment with the new Scrivener software, and concluded that it might be a valuable research tool for future works.
During a recent trip to London, Edwards extended his experimentation with Scrivener into new research paths. He took the opportunity of his trip to explore the British Library’s extensive holdings on the history of chess, and used the beta version of Scrivener for Windows to begin organizing projects based on several topics in chess-related history.
Edwards described how easy it was to write using Scrivener, noting that for any author with a tendency towards writer’s block, the simple, almost playful, workflow in Scrivener, which captures initial notes, research items, web links, outlines and fleeting ideas, might serve to overcome any hesitation in putting ideas to paper. Edwards used Scrivener to begin outlining and researching a proposed work documenting the chess matches played at the 9th Chess Olympiad of 1950 at Dubrovnik, a tournament in which 480 games took place. Using Scrivener, he was able to save all of his notes, references, and writing about the event, including building a stored collection of photos and biographical information about each team taking part in the competition.
Edwards recalled participating in meetings of the Scholars’ Environment Committee, which took place at Princeton in the late 1980s. The mission of the Committee was to improve research methods for scholars in an environment where computer-based resources were becoming increasingly important. One tangible result of the Committee’s work that year was an idea for a project that would eventually be called JSTOR, the online resource for archiving academic journals, founded in 1995. The guiding phrase for the Committee’s goals that year, said Edwards, was the idea of taking the “search” out of “research.” Scrivener, Edwards noted, in some sense does that, by allowing all the materials needed for writing a serious scholarly work to be gathered in one place; with Scrivener's split-screen format, it is possible to write in one pane while viewing citations and other research materials in another. Cutting and pasting from one workspace to the next is quite easy, and Scrivener can store many types of documents and files.
Much of the historical literature on chess, Edwards noted, was published between AD 800 and 1890, which means that many of these texts have been digitized and are now available for searching and download via the Google Books interface. Having an entire text downloaded as a resource file in Scrivener is a great convenience for a researcher, said Edwards. Writing clearly about the history of chess involves gathering and presenting many types of information, including diagrams of chessboards and lengthy notations that recount the history of a particular game. As an example, Edwards mentioned his interest in the subject of “the Troitzky line,” a classic concept in the endgame of two knights against a pawn. The strategy can take up to 50 moves to achieve; documenting it can require extensive illustrations and explanations. One of the main benefits of Scrivener to him, said Edwards, is that all of his notes, documentation and diagrams are finally captured in a single environment, so that he can keep his supporting documents close at hand and organized by specific topic.
Edwards described his particular Scrivener workflow, at least as far as his experiments have taken him to date. He uses an online content management system, in this case Princeton’s WebSpace, to save the latest versions of his Scrivener files. He can then retrieve the files from anywhere using a web-based interface, and continue working without worrying about where he left the latest version of his project, or any of its supporting files. (Scrivener also has built-in support for syncing files with the popular Dropbox service.)
Note that the Windows version of Scrivener is still in beta, and is currently free until certain known bugs are fixed. For the moment, the PC and Mac versions of the software don’t recognize each other’s files, and compiling documents into a final format using the Windows version has some documented issues. Still, in the short time the program has been available since November of this year, it has gone through several versions. The latest, version 1.4, said Edwards, shows significant improvements over earlier releases. While Scrivener may still lag behind more familiar word processing platforms in terms of document versioning and formatting, it is a particularly agile tool for the first stages of writing. “It’s an excellent brainstorming tool,” Edwards remarked, noting that other tools, such as Microsoft Word, were designed for corporate environments and reflect the sorts of tasks required by business. Professional writers have very different aims and needs. Scrivener, thanks to the interests of its inventor, was specifically created for such writers and researchers.
Scriptwriter, poet, novelist, short story author or historian? You may want to check out Scrivener as a platform for organizing your next writing project.
The Mac version of Scrivener 2.0 currently retails for US $45. A 15% discount is available to academic users. There is a growing online community of Scrivener users who share their experiences and tips for greater productivity. The Windows public beta version is currently free to download.
This session is the final Lunch and Learn of 2010. Check out the Lunch ‘n Learn schedule in early February for next semester’s program.
In this week’s Lunch ‘n Learn on Wednesday, December 1st, Matthew Salganik, an Assistant Professor in Princeton's Department of Sociology, presented some recent research that has resulted in the creation of an open-source polling site called www.allourideas.org. One of the inspirations for Salganik’s project came from an unlikely source: the popular website www.kittenwar.com, where visitors vote on which of two randomly paired kitten photos is cuter. Given two competing choices--in this case, photos of two cute kittens--this site rapidly gathers user opinions in a way that makes it easy to track social signals; the site uses a fun mechanism for gathering information, and allows any user to easily upload his or her own kitten photos, thereby instantly entering new contestants into the competitive arena of cuteness.
Considering the popularity and broad appeal of the kittenwar site, Salganik reflected on standard forms of data collection that have been (and still are) commonly used for gathering information in the social sciences. For many researchers, collecting information from the general population depends upon survey mechanisms that have changed little in the last century. In this traditional method of data-gathering, researchers think of the questions they want to ask their survey audience well in advance of any feedback from the actual survey. Participants either take all of the survey -- and have their opinions included -- or none, since partial data is rarely considered valid for the final results. Although in the 20th century the mechanism for conducting surveys evolved from face-to-face, door-to-door polling, to random phone calls, to web-based research, this model of assessment has several unavoidable shortcomings. For example, one might ask "what important questions might the original survey have missed?" or "how can the final interpretation of data be made more transparent to other researchers?" Focus groups and other open discussion methods can allow more flexibility in gathering input from respondents -- as well as revealing why respondents make certain choices -- but these methods tend to be slow, expensive, and difficult to quantify. Most significantly, all are based on the same methodology as the face-to-face survey, merely conducted with increasingly up-to-date and scalable methods of delivery. Web-based surveys admittedly reach many more people with far less overhead than canvassing door to door, but are such computer-based surveys really taking advantage of the unique strengths of the World Wide Web? Kittenwar.com suggested to Salganik that there was another, more intuitive way to present ideas and gather data on the web.
Using the model of Wikipedia.org as an example, Salganik remarked upon the internet’s strength in engaging people at their own level of interest. Wikipedia, he said, has become an unparalleled information aggregation system because it is able to harvest the full amount of information that people are willing to contribute to the site. Describing this phenomenon as "the Fat Head vs. the Long Tail," Wikipedia makes it possible to gather knowledge from people who have vastly different levels of commitment to improving the site. On one hand, there are those (fat heads) willing to spend days or months carefully researching and crafting entire Wikipedia entries -- while others, (long tails), are content to insert a missing comma into an entry they happen to be reading at the moment. As such, Wikipedia.org is an example of what might be achieved by an application that truly understands how the internet works best. Traditional surveys can only capture a tiny segment of this range of audience participation and engagement.
So what does the intersection of kittenwar.com and Wikipedia suggest to a researcher who wants to design a 21st-century, web-native survey? Salganik's site, www.allourideas.org, illustrates one solution: a model that takes advantage of the most essential quality of the World Wide Web, where, according to Salganik, "an unimaginable scale and granularity of data can be collected from day to day life." The development of allourideas.org, funded in part by Google and the Center for Information Technology Policy at Princeton University (CITP), uses the same "bottom-up" approach as kittenwar.com, paired with an algorithm developed by Salganik and his team, consisting of a single web developer and several student researchers. The result is an open-source system where "any group, anywhere, can create their own wiki survey."
Salganik describes the www.allourideas.org website as an "idea marketplace," designed to harvest the full amount of information that people are willing to provide on any given topic. Participants in a survey on the site are presented with random pairs of options, and pick the one they most favor; they then are given a second pair of different options, and vote again. Eventually, the most popular ideas -- either provided by the survey author(s), or submitted by any person voting on the site -- can be quickly identified.
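The pairwise mechanism described above can be sketched in a few lines. The actual allourideas.org scoring is more sophisticated (the site estimates the probability an idea beats a randomly chosen opponent), but a minimal stand-in scores each idea by the fraction of its pairwise appearances that it has won. The idea names below are purely illustrative:

```python
from collections import defaultdict

def record_vote(tallies, winner, loser):
    """Record one pairwise vote: `winner` beat `loser`."""
    tallies[winner]["wins"] += 1
    tallies[winner]["shown"] += 1
    tallies[loser]["shown"] += 1

def scores(tallies):
    """Score each idea as the share of its appearances it won."""
    return {idea: t["wins"] / t["shown"] for idea, t in tallies.items()}

tallies = defaultdict(lambda: {"wins": 0, "shown": 0})
record_vote(tallies, "more bike racks", "longer library hours")
record_vote(tallies, "more bike racks", "cheaper coffee")
record_vote(tallies, "cheaper coffee", "longer library hours")
ranked = sorted(scores(tallies).items(), key=lambda kv: -kv[1])
# ranked is [('more bike racks', 1.0), ('cheaper coffee', 0.5), ('longer library hours', 0.0)]
```

Because every vote compares only two options, the most popular ideas surface from the accumulated win rates without any voter ever ranking the full list.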
The homepage of www.AllOurIdeas.org
An early version of the site was developed for the Undergraduate Student Government (USG) at Princeton, as a mechanism to assess the most important campus issues according to Princeton students. Voting began with ideas submitted by leaders in the USG, with additional suggestions submitted by students participating in the polling. In the end, two of the top five ideas that emerged as the most important to the student population were contributed by student voters, and were not among the ideas originally suggested by the USG. The percentage of participation in the poll was also remarkable: 40% of the undergraduate population took part, resulting in nearly 40,000 votes on paired ideas--as well as generating 100 new ideas not thought of by the original authors of the survey. Salganik and his team concluded that using this survey tool with an audience already engaged in the issues being presented can markedly improve the quality of the data generated. "In the old survey method," Salganik explained, "tons of data are left on the table." New methods of data collection, such as allourideas.org, are by contrast inclusive from the bottom up, and reflect the effort, interest, and participation that engaged respondents are willing to contribute to the discussion.
Since its public release, www.allourideas.org has generated 700 new idea marketplaces and 6,000 new ideas, contributed over the course of 400,000 votes. Users of the free web-hosted interface include Columbia University Law School, The Washington Post, and the New York City Department of Parks. Anyone with a few ideas and a target audience willing to provide feedback can make their own space for collecting and prioritizing ideas on the allourideas.org site. Results are returned to the survey authors with full transparency, including some basic demographics about the geographic location of voters, the length of participation in each individual voting session, and the pair of choices at which a participant leaves the voting. (Salganik explained that leaving a session is sometimes indicative of the voter's perception that their only choice is between two bad ideas, although in other cases, voters leave because they feel they’ve voted enough.) Voting is anonymous, and voters are encouraged to return to vote as often as they wish.
Salganik described some of the mechanics used to keep the voting fresh and current, such as weighting recently submitted ideas with more frequent appearances in the polling to give them equal footing with older ideas. The polling mechanism is designed to handle a very large number of ideas, and the more people voting, the better the results. In future releases of the code, idea pairs might even adapt to prior choices made by an individual voter. It is important to the success of such a binary voting system, explained Salganik, that voters do not see previous results; that ignorance keeps flash opinions and bandwagon effects out of the voting. The ideal polling group is at least 20 people, although any number of respondents can be accommodated. The poll currently being conducted by The Washington Post on reader feedback and participation is the largest to date on the site. At the time of this Lunch ‘n Learn, the poll had been open for 3 days and had already generated more than 40,000 votes.
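The freshness-weighting Salganik mentions can be approximated by sampling ideas with probability inversely proportional to how often each has already been shown. This is a guessed-at sketch of the mechanism, not the site's actual algorithm, and all names are illustrative:

```python
import random

def next_pair(ideas, shown_counts, rng=random):
    """Pick two distinct ideas, favoring those shown least often."""
    weights = [1.0 / (1 + shown_counts.get(i, 0)) for i in ideas]
    first = rng.choices(ideas, weights=weights, k=1)[0]
    rest = [i for i in ideas if i != first]
    rest_weights = [1.0 / (1 + shown_counts.get(i, 0)) for i in rest]
    second = rng.choices(rest, weights=rest_weights, k=1)[0]
    return first, second

ideas = ["old idea A", "old idea B", "brand-new idea"]
counts = {"old idea A": 50, "old idea B": 50, "brand-new idea": 0}
pair = next_pair(ideas, counts)
# The brand-new idea appears in far more than its 1/3 share of pairs.
```

Under this weighting, a newly submitted idea is shown almost every round until its appearance count catches up with the older ideas, giving it an equal chance to accumulate wins.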
The concept behind www.allourideas.org consists of a few basic characteristics. The site is simple. It's powerful. It's free. It's also constantly improving. It proves, Salganik concluded, that when information is presented and gathered properly, there is wisdom, rather than madness, in the opinions of the crowd – and there needn’t be a cute kitten anywhere in sight.
Free "idea marketplaces" can be created by anyone on the hosted site at www.allourideas.org. If you are interested in creating a site, come prepared with a target audience and a few ideas in mind -- then invite your audience to begin voting and contributing their own ideas.
allourideas.org is also an open-source-code project. The code is available at github.com. You can also follow the project on Twitter and on Facebook.
All who listen to Jerry Ostriker, Professor of Astrophysical Sciences at Princeton University, come to know that we live in profoundly exciting times. We have learned only recently the age and composition of the universe, and for the first time, we are coming to understand how the galactic structures we observe throughout the sky came to be. Simply put, where do they come from, and how could they form if the early universe was relatively uniform? And how can we use them as standard objects unless we understand how and when they formed and how they evolved?
One of the key findings, said Ostriker at the September 29 Lunch 'n Learn seminar, came from the WMAP satellite. Its observations of the Cosmic Background Radiation show the beginnings of structure in the aftermath of the Big Bang.
Armed with our best cosmological models, asks Ostriker, "Can we start with those initial conditions and our understanding of the standard model of cosmology, add standard physics, compute forward and end with galaxies like those we see about us?"
From 50 years of observations, he tells us, we know that giant elliptical galaxies, galaxies that involve on the order of 100 million stars, form early and grow in size and mass without much late star-formation. He adds that major mergers are uncommon at later times or else disk galaxies would have been destroyed.
Using high resolution simulations of massive galaxy formation, he has computed the formation of cosmic structures. He begins by putting down particles on a dense grid with slight perturbations of the positions consistent with the early large scale structure given by the CBR. He then gives the particles small velocities consistent with the density structure and the continuity equation. He then uses the supercomputers at Princeton to calculate the accelerations of all the particles using Newton's laws.
The simulation updates again and again the positions and velocities and accelerations to find the new distribution of particles, all culminating with a video simulation of the evolution of cosmic structures.
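The update loop described above -- accelerations from Newton's gravity, then new velocities and positions, repeated -- is the core of any N-body integration. A toy version in plain Python (softened gravity, units with G = 1; a sketch of the scheme, not the Princeton code, which uses far more sophisticated methods at vastly larger scale) looks like this:

```python
import math

def accelerations(pos, mass, soft=0.05):
    """Newtonian acceleration on each particle from all others (G = 1)."""
    n = len(pos)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = [pos[j][k] - pos[i][k] for k in range(3)]
            r2 = sum(x * x for x in d) + soft * soft  # softening avoids singularities
            f = mass[j] / (r2 * math.sqrt(r2))
            for k in range(3):
                acc[i][k] += f * d[k]
    return acc

def step(pos, vel, mass, dt):
    """One leapfrog (kick-drift-kick) step; returns new positions and velocities."""
    acc = accelerations(pos, mass)
    vel = [[v[k] + 0.5 * dt * a[k] for k in range(3)] for v, a in zip(vel, acc)]
    pos = [[p[k] + dt * v[k] for k in range(3)] for p, v in zip(pos, vel)]
    acc = accelerations(pos, mass)
    vel = [[v[k] + 0.5 * dt * a[k] for k in range(3)] for v, a in zip(vel, acc)]
    return pos, vel
```

Repeating `step` thousands of times over millions of particles, with tree or grid methods replacing the O(n²) double loop, is what produces the evolving cosmic structures shown in the video simulations.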
Says Ostriker, "Looking backwards we have been able to reconstruct from the detailed structure of our own Galaxy and from the fossil evidence derived from the study of nearby galaxies a plausible history of how galaxies formed over the last several billion years. In addition, now that we have a quite definite cosmological model, providing us with a quantitative picture of how perturbations grew from very low amplitude Gaussian fluctuations, we can perform the forward modeling of representative pieces of the universe using standard physical processes to see how well we match our local knowledge and the time-reversed modeling based on the fossil evidence. Finally, we can employ large ground and space based telescopes to use the universe as a time-machine - directly observing the past history of our light-cone. While none of these approaches can give us at the present time results accurate to more than roughly the 5% -> 10% level, a coherent and plausible picture is emerging."
"Massive galaxies form in two phases. In the first phase, which peaks at redshift z = 6 and ends by redshift z = 2, cold gas streams in, making stars in a small (<1 kpc) region; but as the stellar mass approaches 10^11 solar masses, a hot bubble forms which suppresses further inflow of cold gas. From redshift z = 3 to the present time, small stellar satellite systems are accreted at typically 10 kpc from the center, and the size of the total system grows by about a factor of three as the mass doubles. This added, accreted component is mainly comprised of old and low-metallicity stars. Energy release from gravitational infall in various forms will terminate star formation, leaving the galaxies 'red and dead', even in the absence of feedback from supernovae (SN) or massive black holes (MBHs). This physical picture seems naturally to lead to the mass, size scale and epoch of galaxy formation and, increasingly, to a first understanding of the detailed internal structure of these systems."
Jeremiah P. Ostriker has been an influential researcher in one of the most exciting areas of modern science, theoretical astrophysics, with current primary work in the area of cosmology, particularly the aspects that can be approached best by large scale numerical calculations.
Ostriker has investigated many areas of research, including the structure and oscillations of rotating stars, the stability of galaxies, the evolution of globular clusters and other star systems, pulsars, X-ray binary stars, the dynamics of clusters of galaxies, gravitational lensing, astrophysical blast waves, active galactic nuclei, the cosmic web, and galaxy formation.
Most significantly, Ostriker's research focused on the theories of:
Dark Matter and Dark Energy
The Warm-Hot Intergalactic Medium (WHIM)
The First Stars and Reionization of the Universe
Interaction between Quasars and their surroundings
Ostriker has supervised and collaborated with many young researchers and graduates students. He is the author or co-author of more than 300 scientific publications.
By virtue of its mobility, portability, and ease of connection, wireless technology provides users with unprecedented freedom, suggests H. Vincent Poor, Michael Henry Strater University Professor of Electrical Engineering and Dean of the School of Engineering and Applied Science.
Wireless communications is among our most advanced, and rapidly advancing, technologies, he notes. New wireless applications and services emerge on an almost daily basis, and the number of users of these services is growing at an exponential rate. More than half of the world's population uses cell phones, and this is only one of a dazzling array of wireless technologies that have emerged in recent times.
At the April 21 Lunch ‘n Learn seminar, H. Vincent Poor, surveyed the technological landscape, some of its history and societal implications, emerging developments, and recent issues in wireless research.
Railroads reached near ubiquity, in terms of the number of countries using the technology, in 125 years. The telephone took nearly 100. Personal computers took 25 years. Remarkably, the mobile phone has taken just 15 years. More than just a personal communications device, it has become an engine of commerce in both the developed and developing world. Indeed, the technology has permitted countries in the third world to leapfrog the need for extensive land lines.
The results are extraordinary, says Poor. There are now more than 8 billion text messages a day, picture messaging has become standard, mobile gaming is growing, and video messaging has begun to emerge. We are approaching 5 billion cellular subscribers with explosive growth in wireless applications covering all key areas, from science and medicine, transportation and commerce, security and defense, through entertainment and social networking. And, as a result, it is a very lucrative business, accounting for more than $1 trillion a year.
The main challenge of wireless, notes Poor, is to provide the services familiar to wired systems, but with mobility. The challenges grow with higher capacity, and more simultaneous users in quickly moving vehicles. New 4G networks promise to provide reliable high speed connectivity for highly mobile users.
The one clear trend, says Poor, is the convergence of computing and communications. The cell phone, now an iPhone or an Android, is both a computing platform and a communications device. In the years to come, he predicts, cars and homes will become nodes on the internet, inventories will be tracked automatically through built-in wireless sensors, and we will habitually use a range of location-based and social networking services.
In his talk, Poor highlighted three areas of wireless research. In each, the application, or “pull,” is matched by a “push”: interesting research at the physical layer, the theory and methodology of data transmission.
The first involves securing wireless transmission, a more complex undertaking in the absence of a physical infrastructure. It is possible to exploit the fundamental physics of the network, says Poor, to make it more secure. The idea takes advantage of the fact that individual network connections exhibit different physical properties due to the randomness of radio propagation. On-going research in this area involves coding theory, cryptography, game theory, and cross-layer network design.
The second research area involves sensor networks and distributed learning. Individual sensors within a wider grid measure a subset of large data sets, and each sensor can communicate with neighboring sensors to make optimal inferences about their physical surroundings.
The third research area involves the interaction of the wireless infrastructure with social networks, imposing a complex new structure. A famous problem in social psychology, the small-world problem, suggests that any two people on the planet are connected by about six degrees of separation. Small-world network analysis can model individuals and their local and long-range interactions. It turns out, says Poor, that the expected number of degrees of separation between two sufficiently distant people can be estimated from the size of the world's population and the number of acquaintances each person has.
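The back-of-the-envelope version of that estimate assumes each person knows roughly k others, so within d hops one can reach about k^d people; covering a population of N then takes about log(N)/log(k) hops. The specific figures below (6.8 billion people, 44 acquaintances each) are chosen purely for illustration:

```python
import math

def degrees_of_separation(population, acquaintances):
    """Hops d such that acquaintances**d roughly covers the population."""
    return math.log(population) / math.log(acquaintances)

print(round(degrees_of_separation(6.8e9, 44), 1))  # prints 6.0
```

The logarithmic dependence is why the answer stays near six even as the population grows into the billions.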
Speaker Bio: H. Vincent Poor is the Michael Henry Strater University Professor of Electrical Engineering at Princeton University, where he is also Dean of the School of Engineering and Applied Science. His research interests lie in the area of wireless networking and related fields. Among his publications in these areas is the book MIMO Wireless Communications (Cambridge University Press, 2007). Dr. Poor is a member of the National Academy of Engineering, and is a Fellow of the IEEE, the American Academy of Arts & Sciences and the Royal Academy of Engineering of the United Kingdom. He received the 2005 IEEE Education Medal and the 2009 Edwin Howard Armstrong Achievement Award of the IEEE Communications Society.
The project aimed to explore the use of the e-readers in classes for which e-reserves were the primary readings. The printing of e-reserve readings at Princeton accounts for a large portion of printing in public clusters (total of 10 million sheets of paper last year). The e-reader pilot sought to target e-reserve readings and present them on an e-reader to see if printing could be reduced.
The pilot participants consisted of three faculty members, 51 students, and several administrators in the Library and the Office of Information Technology.
The three courses in the pilot all involved considerable eReserve reading, all had some content in the Kindle store, and all were small enough that the pilot could supply devices to every enrolled student. The courses in the pilot were Civil Society and Public Policy (Professor Stanley Katz, an undergraduate seminar), U.S. Policy in the Middle East (Ambassador Daniel Kurtzer, a graduate seminar), and Religion and Magic in Ancient Rome (Professor Harriet Flower, a graduate seminar).
Devices were given to students in September. The pilot was voluntary with opt-out possibilities at any time. One student opted out at the start of the pilot. No student opted out after the pilot began. Students were asked to do the bulk of the course reading on the Kindle. 95% of the students reported that they had not previously used an eReader.
Participants were asked to do pilot course readings on the e-reader without printing, as far as they felt it was possible. The pilot concluded with a survey and some final focus groups in February 2010.
The goals of the pilot were to reduce the desire to print and to explore the unique strengths of eReaders, all while being careful not to adversely affect the classroom experience.
At the April 14 Lunch ‘n Learn seminar, Janet Temos, Director of OIT’s Educational Technologies Center; Stan Katz and Dan Kurtzer, two of the faculty involved in the pilot; and Trevor Dawes, Circulation Director at the University Library, reviewed the findings of the Princeton e-reader pilot and shared their experiences.
Temos reported that the pilot did indeed reduce students’ desire to print.
Students judged the screen size, image resolution, device weight and storage capacity to be excellent. Highlighting, annotating, navigating within and between books, and the dictionary features received much less positive evaluations. Overall, Temos reported, the students thought that the devices had promise, which, she said, is likely why none opted out by the end of the pilot.
Kurtzer noted that, in his graduate seminar, all of the students were expected to read the course material before coming to class. And so, while they may have experienced some challenges with navigation, those did not occur in class. He reported that all of the students liked the fact that they could carry all of their reading around all of the time.
Many of Kurtzer’s students have recently downloaded material from current classes to maintain the experience. Main criticisms included highlighting, keeping track of bookmark references, and moving between and among passages from different books.
One problem the pilot encountered was the difficulty of working with PDF documents, whose type size cannot be enlarged on the device. The only surprise in the data, reported Kurtzer, was that the pilot appears to have cut students’ printing by only about 50%.
Use of the Library’s eReserve system has grown exponentially, Dawes commented. The pilot provided a good opportunity to test the use of the eReserves system on an eReader platform. For this project, the processing was different: pages had to be scanned individually, trimmed, and processed further by OIT staff. Early on, we discovered that the Kindle could not read PDF documents in their native format. The amount of staff time involved was large and, he concluded, would not be sustainable. We will continue to monitor progress to see if new devices will be able to accommodate PDFs more efficiently.
Professor Katz’s course involved 23 books. He emphasized that the device is ideal for travel, an assessment with which his students wholeheartedly agreed. That said, it was wholly inappropriate for the close textual work involved in the course.
Classroom discussion required that all students be looking at the same passages, and they were expected to annotate those passages. Annotations collapse into footnotes, the keyboard is tough to use, and the Kindle had built-in limits on the amount of text that could be highlighted and annotated. The tedious nature of finding passages caused consistent classroom confusion. All that said, he is off to San Francisco for a dissertation review. “I will load it onto the Kindle,” said Katz, “and love it once again.”
Janet Temos was trained as an architectural historian, receiving degrees in art history from Williams College (MA 1992) and Princeton University (PhD 2001). She began working with the Educational Technologies Center (ETC) in 1993 and became a full-time member of the staff in 2000. She is now director of ETC and continues to work with faculty who wish to use computer technology in their teaching. Current projects include courses on film, archaeology, medieval manuscripts, African languages taught in the US, and a collaborative project with the Princeton University Art Museum to develop an online repository of digital images of objects in the museum’s East Asian collection.
Daniel C. Kurtzer retired from the U.S. Foreign Service with the rank of Career-Minister. From 2001-2005 he served as the United States Ambassador to Israel and from 1997-2001 as the United States Ambassador to Egypt. He served as a political officer at the American embassies in Cairo and Tel Aviv, Deputy Director of the Office of Egyptian Affairs, speechwriter on the Policy Planning Staff, Deputy Assistant Secretary of State for Near Eastern Affairs, and Principal Deputy Assistant Secretary of State for Intelligence and Research. Kurtzer was a member of the American delegation to the Israel-Palestinian autonomy negotiations (1979-1982), helped negotiate the creation of the Multinational Force and Observers (1981-1982), negotiated and oversaw the successful arbitration of the Taba border dispute between Israel and Egypt, crafted the 1988 peace initiative of Secretary of State George P. Shultz, and in 1991 served as a member of the U.S. peace team that brought about the Madrid Peace Conference. Subsequently, he served as coordinator of the multilateral peace negotiations and as the U.S. Representative in the Multilateral Refugee Working Group. Kurtzer received several of the U.S. Government’s most prestigious awards, including the President’s Distinguished Service Award, the Department of State Distinguished Service Award, the National Intelligence Community’s Award for Achievement, and the Director General of the Foreign Service Award for Political Reporting. Ph.D. Columbia University.
Stanley Katz is president emeritus of the American Council of Learned Societies. His recent research focuses upon the relationship of civil society and constitutionalism to democracy, and upon the relationship of the United States to the international human rights regime. He is also a commentator on higher education policy. Formerly Class of 1921 Bicentennial Professor of the History of American Law and Liberty at Princeton University, Katz is a scholar of American legal and constitutional history, and on philanthropy and non-profit institutions. He is the editor of the Oliver Wendell Holmes Devise History of the Supreme Court of the United States and of the forthcoming Encyclopedia of Legal History (OUP, 2009). The author and editor of numerous books and articles, he has served as president of the Organization of American Historians and the American Society for Legal History and as vice president of the Research Division of the American Historical Association. He is a member of the Board of Trustees of the Newberry Library, the Copyright Clearance Center and numerous other institutions. He is a commissioner of the National Historic Publications and Records Commission. He also currently serves as chair of the American Council of Learned Societies/Social Science Research Council Working Group on Cuba. Katz is a member of the New Jersey Council for the Humanities, the American Antiquarian Society, the American Philosophical Society; a fellow of the American Society for Legal History, the American Academy of Arts and Sciences, and the Society of American Historians; a corresponding member of the Massachusetts Historical Society and an academico correspondiente of the Cuban Academy of Sciences. He has honorary degrees from several universities. Ph.D. Harvard University. Katz is director of the Center for Arts and Cultural Policy Studies.
Trevor A. Dawes is the Circulation Services Director at the Princeton University Library, where he is responsible for the circulation, reserve, current periodicals, stack, remote storage, and Borrow Direct operations in the library. He previously held several positions at the Columbia University Libraries. Mr. Dawes earned his MLS from Rutgers University’s School of Communication, Information, and Library Studies, and has two additional Master’s degrees from Teachers College, Columbia University. He is an active member of the American Library Association and the Association of College and Research Libraries.
Princeton University has created a cyberinfrastructure, says Curt Hillegas, the Director of Princeton's TIGRESS High Performance Computing and Visualization Center, itself a collaboration between the Princeton Institute for Computational Science and Engineering (PICSciE) and the Office of Information Technology. Developed within the past decade, this cyberinfrastructure consists of computational systems, data and information management, advanced instruments, visualization environments, and people, all linked together by software and advanced networks to improve scholarly productivity and enable knowledge breakthroughs and discoveries not otherwise possible.
At the April 8 Lunch 'n Learn seminar, Hillegas noted that the University's research computing activity has grown to keep pace with, and to provide leadership for, this international trend. TIGRESS maintains a vast hardware and storage infrastructure, and its staff provide support for programming and for the new visualization facilities within the Lewis Science Library.
The effort, of course, also involves faculty across many disciplines and departments. This session highlighted the work of two University faculty: Professor Annabella Selloni from Chemistry and Professor Clarence Rowley from Mechanical and Aerospace Engineering. The session demonstrated how computational science and engineering is enabling and accelerating scientific discovery.
Annabella Selloni’s research aims at a microscopic understanding of the properties of materials, with specific emphasis on surface and interface phenomena. At the Lunch ‘n Learn seminar, she discussed the quest to discover an efficient and perhaps less expensive alternative to platinum as a catalyst for the production of hydrogen. Princeton’s high-performance computing systems have permitted her to model and manipulate functionalized electrodes. At the seminar, she showed simulations illustrating how small surface changes can have a significant effect on the production of hydrogen.
Professor Clarence Rowley is modeling flows past a cavity, as would occur with a sunroof, an aircraft wheel well, or a weapons bay. Although his efforts have employed the processing power of a supercomputer, his aim has been to achieve workable results and a control design with a far smaller number of equations. Full systems require as many as 2,000,000 equations; Rowley now has control designs based upon just two. With such active control, it may be possible, for example, to mimic the fluid dynamics of insects and small birds and to design a controller to stabilize the leading edge of aircraft wings.
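The reduction Rowley described, from millions of state variables down to a handful, is characteristic of projection-based model reduction. As a purely illustrative sketch (a toy linear system, not Rowley's actual cavity-flow models or code), the following Python fragment builds a POD (proper orthogonal decomposition) basis from simulation snapshots and projects a 200-equation system down to two:

```python
import numpy as np

# Toy stable linear system x' = A x with n = 200 states, reduced to r = 2.
# This is a generic POD/Galerkin sketch for illustration only.
rng = np.random.default_rng(0)
n, r = 200, 2
A = rng.standard_normal((n, n)) / np.sqrt(n) - 2.0 * np.eye(n)

# Collect snapshots of trajectories from a few random initial conditions.
dt, steps = 0.01, 300
snapshots = []
for _ in range(5):
    x = rng.standard_normal(n)
    for _ in range(steps):
        x = x + dt * (A @ x)          # forward-Euler time step
        snapshots.append(x.copy())
X = np.array(snapshots).T             # n x (number of snapshots)

# POD basis: leading left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(X, full_matrices=False)
Phi = U[:, :r]                        # n x r orthonormal basis

# Galerkin projection yields an r-dimensional reduced system z' = Ar z.
Ar = Phi.T @ A @ Phi                  # r x r
print(Ar.shape)                       # (2, 2)
```

The same projection idea, applied with far more sophisticated bases and nonlinear dynamics, is what makes a two-equation control design conceivable for a flow that nominally requires millions of equations.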
Hillegas concluded by inviting prospective users to apply to use the TIGRESS HPC resources. Users will find all the information needed to select the resources that suit them, as well as information about applying for an account and time on the systems.
About the speakers:
After undergraduate studies at the University La Sapienza in Rome, Italy, Annabella Selloni graduated from the Swiss Federal Institute of Technology in Lausanne, Switzerland (1979). This was followed by a postdoctoral position at the IBM T. J. Watson Research Center in Yorktown Heights (1980-1982). She was Assistant Professor at the University La Sapienza in Rome (1982-1988), Associate Professor at the International School for Advanced Studies in Trieste, Italy (1988-1995), and Associate Professor at the University of Geneva, Switzerland (1996-1999). In 1999 she joined the Department of Chemistry at Princeton University, initially as Senior Research Staff and Lecturer, and she has been a full Professor since 2009. Her research interests are in theoretical and computational condensed-matter physics and chemistry, with particular focus on the use of first-principles electronic structure and molecular dynamics methods to obtain an atomic-scale understanding of the structural and electronic properties of surfaces and interfaces, including organic-inorganic and solid-liquid interfaces, surface reactions and catalysis, and photochemistry and photocatalysis. Prof. Selloni has over 160 publications in the area of theoretical and computational chemical physics. She serves on the Editorial Boards of the Journal of Chemical Physics and Surface Science.
Professor Clarence Rowley received his B.S.E. degree from Princeton University and his M.S. and Ph.D. from the California Institute of Technology. He joined the Princeton faculty in 2001 and is currently an Associate Professor in the Department of Mechanical and Aerospace Engineering and an associated faculty member in the Program in Applied and Computational Mathematics. His research involves modeling and control of complex systems, particularly fluid systems, with specific interests including modeling and model reduction for bifurcation analysis and control; numerical methods, both for fluid simulations and for analysis of dynamical systems; and applications of geometric methods in fluid mechanics.
Curt Hillegas received his B.S. in Chemistry from Lehigh University and his M.A. and Ph.D. in Chemistry from Princeton University. Curt is the Director of Princeton’s TIGRESS High Performance Computing and Visualization Center, a collaboration between the Princeton Institute for Computational Science and Engineering and the Office of Information Technology. He has helped to build a centrally managed research computing infrastructure that includes 65 TFLOPS of computational systems and 1 PB of shared storage as well as staffing for system administration, programming, and visualization support. He also serves on the Steering Committee for the EDUCAUSE Campus Cyberinfrastructure working group. Curt’s past work at Princeton includes managing the enterprise Unix group, architecting enterprise server and storage solutions, designing and managing central email infrastructure, and general Unix system administration.
Carla Zimowsk is the Technology Manager for the History Department at Princeton University, where she has provided technical support for 10 years. Though not trained as a historian or a GIS expert, she draws upon graduate work in organizational communications and knowledge management, and during the past decade she has come to understand the needs of those she supports.
"The faculty all have stuff," she began at the March 24 Lunch 'n Learn seminar, "and it tells a story when pulled together." During a trip to the Visualization Centre at the University of Birmingham several years ago, she realized the importance of visualizing data.
On her return, she began to assist a steadily growing number of history faculty who are also excited about the use of such tools. In a Lunch ‘n Learn presentation on March 5, 2008, Professor John Haldon discussed the Avkat Project, a study of a small fortress town near Armenia between the 5th and 11th centuries. The Avkat team uses the technology to assemble images and tax records, and even to predict where to dig. The result is a multi-disciplinary approach to a complete material-culture and landscape-evolution sequence from the Neolithic period through the modern day. Haldon has been able to calculate population densities and primary dietary requirements, and to estimate land use.
In another Lunch ‘n Learn presentation, on March 26, 2008, Professor Emmanuel Kreike showed how he was able to overlay fly-over maps from the 1940s on present-day satellite imagery to draw conclusions about deforestation and settlement over time in Namibia. His GIS databases also contain modern features, from roads and fences to buildings and wells, along with tax records, photographs, and even interviews with local inhabitants.
Modern Geographical Information Systems reveal relationships, patterns, and trends, not only in physical features but also in economic and social phenomena. History Professor Rob Karl is using GIS to search for useful correlations in the international history of political violence. He has charted the distribution of major bandit groups and community action boards, as well as changes in political affiliation, in Colombia.
Professor Yair Mintzker is studying the defortification of German cities in the 18th and 19th centuries. Charting such events permits scholars to observe key trends. Mintzker plans another interesting use of information technology: the online replication of a German prison.
“What do historians do with computers other than use them as glorified typewriters?” This was the question that Zimowsk most often got from colleagues when she first started providing technical support in the history department in 1999. Ten years later, the question hasn’t changed much as some might now ask, “What do historians do with computers other than create PowerPoint shows for class?”
Both questions assume that historians concern themselves only with the archival collection and recollection of dates, events, places, and people. By working with and observing these historians, she has learned that while they are interested in these individual facts, they are also interested in making connections and inferences among them: finding patterns, making comparisons, or trying to visualize and experience what cannot be seen, touched, or witnessed first-hand.
Speaker Bio: Before working in the University’s History Department, Carla worked for the Art Museum for seven years, and before that in Graduate Admissions. She has a degree in Music from Blackburn College and a Master’s in Communication and Information from Rutgers.
Imagine harnessing the power of the sun within a magnetic bottle. Unlike hydrogen bombs, which are essentially uncontrolled fusion reactions, scientists have for decades been pursuing the peaceful challenge of safely harnessing fusion energy, a potentially efficient and environmentally attractive energy source. Progress in addressing this scientific grand challenge, suggested William Tang, the Director of the Fusion Simulation Program at the Princeton Plasma Physics Laboratory (PPPL), has benefited substantially from advances in supercomputing. At the March 10 Lunch 'n Learn, Tang noted that such capabilities continue to progress at a remarkable rate, from terascale to petascale today, and on to exascale in the near future.
If we can create the conditions for fusion to occur, says Tang, bringing deuterium and tritium together at very high temperatures, the reaction produces alpha particles, fast neutrons, and an energy multiplication of 450:1. It would then be possible to use that energy to heat the burning plasma in a self-sustaining reaction.
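The reaction Tang described is the standard deuterium-tritium fusion reaction; written out (a textbook fact, not a detail from the talk itself), it is:

```latex
\mathrm{D} + \mathrm{T} \;\longrightarrow\; {}^{4}\mathrm{He}\,(3.5\ \mathrm{MeV}) \;+\; \mathrm{n}\,(14.1\ \mathrm{MeV})
```

The 450:1 energy multiplication is consistent with the ratio of the 17.6 MeV released per reaction to the tens of keV of thermal energy needed to bring the nuclei together, and it is the 3.5 MeV alpha particles, confined by the magnetic field, that can reheat the plasma and sustain the burn.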
The Federal Government recognizes the importance of the effort, as evident, for example, in the Department of Energy document, “Facilities for the Future: A Twenty-Year Outlook.” Current Presidential Science Advisor John Holdren has commented that it is important to shrink the time scale for achieving fusion energy deployment by increasing appropriate investments in fusion research and development.
Tang pointed out that major progress achieved over the years in magnetic fusion research has led to ITER, a multi-billion dollar burning plasma experiment currently under construction in Cadarache, France. Seven governments (EU, Japan, US, China, Korea, Russia, and India) that represent over half of the world’s population are collaborating on this international effort led by the EU. To date, laboratory experiments have produced 10 megawatts of power for approximately 1 second. The goal for ITER is to produce 500 megawatts of heat from fusion reactions for more than 400 seconds. A successful ITER experiment would demonstrate the scientific and technical feasibility of magnetic fusion energy.
Tang emphasized that the burning plasma experiment is a truly dramatic step forward in that the fusion fuel will be sustained at high temperature by the fusion reactions themselves. Worldwide experimental data and computational projections indicate that ITER can likely achieve its design performance. Indeed, notes Tang, temperatures in existing experiments have already exceeded what is needed for ITER.
Tang expressed the hope that American investments in fusion energy development will keep pace with those of other countries, and that it will be possible to deal effectively with political and financial constraints to achieve the sustained support that these highly challenging research efforts will require. Such support will be essential for attracting, training, and assimilating the bright young people needed to move the program forward.
The ITER effort will clearly require strong research and development efforts to harvest the scientific knowledge, which Tang pointed out entails properly integrating advanced computation with experimental data acquisition and analysis, together with fundamental plasma theory. Progress will be significantly aided by the accelerated development of computational tools and techniques to support the scientific understanding needed to develop predictive models that can prove superior to empirical extrapolations of experimental results. This provides the key motivation for the Fusion Simulation Program (FSP), a new U.S. Department of Energy initiative supported by its Offices of Fusion Energy Science and Advanced Scientific Computing Research, currently in the program definition and planning phase.
Tang expects that the FSP will make unique contributions to the fusion program by addressing the integration challenges of multi-scale physics problems that are currently treated mostly in isolation. The FSP approach will involve carrying out a rigorous and systematic validation program to enhance confidence in the reliability of the predictive models being developed to improve scenario modeling for ITER and for future devices.
Tang added that even more powerful super-computers at the “exascale” range and beyond will help meet the formidable future challenges of designing a demonstration fusion reactor (DEMO) after ITER. With ITER and leadership class computing being two of the most prominent current missions of the U.S. Department of Energy, whole device integrated modeling, which can achieve the highest possible physics fidelity, is a most worthy exascale-relevant project for producing a world-leading realistic predictive capability for fusion. This should prove to be of major benefit to U.S. strategic considerations for Energy, Ecological Sustainability, and Global Security.
William Tang is the Director of the Fusion Simulation Program at the Princeton Plasma Physics Laboratory (PPPL), the U. S. Department of Energy (DoE) national laboratory for fusion research. He is a Fellow of the American Physical Society, and on October 15, 2005, he received the Chinese Institute of Engineers-USA (CIE-USA) Distinguished Achievement Award. The CIE-USA, which is the oldest and most widely recognized Chinese-American Professional Society in North America, honored him “for his outstanding leadership in fusion research and contributions to fundamentals of plasma science.” He has been a Principal Research Physicist at PPPL and Lecturer with Rank & Title of Professor in the Department of Astrophysical Sciences since 1979, served as Head of the PPPL Theory Department from 1992 through 2004, and was the Chief Scientist at PPPL from 1997 until 2009. He also played a prominent national leadership role in the formulation and development of the DoE’s multi-disciplinary program in advanced scientific computing applications, SciDAC (Scientific Discovery through Advanced Computing). For the next two years he will be the PI (Principal Investigator) leading a national multi-disciplinary, multi-institutional team of plasma scientists, computer scientists, and applied mathematicians from 6 national laboratories, 2 private industry companies, and 9 universities to carry out the program definition and planning of DoE’s Fusion Simulation Program (FSP).
In research activities, Dr. Tang is internationally recognized for his leading role in developing the requisite mathematical formalism as well as the associated computational applications dealing with electromagnetic kinetic plasma behavior in complex geometries. He has over 200 publications - with more than 125 peer-reviewed papers in Science, Phys. Rev. Letters, Phys. Fluids/Plasmas, Nuclear Fusion, etc. and an “h-index” or “impact factor” of 42 on the Web of Science, including over 5300 total citations. He has guided the development and application of the most widely recognized codes for realistically simulating complex transport dynamics driven by microturbulence in plasmas and is currently the Principal Investigator of a multi-institutional DoE INCITE Project on “High Resolution Global Simulations of Plasma Microturbulence.” The INCITE (Innovative and Novel Computational Impact on Theory and Experiment) Program promotes cutting-edge research that can only be conducted with state-of-the-art super-computers. Prof. Tang has also been a key contributor to teaching and research training in Princeton University’s Department of Astrophysical Sciences for over 30 years and has supervised numerous successful Ph.D. students, who have gone on to highly productive scientific careers. Examples include recipients of the prestigious Presidential Early Career Award for Scientists and Engineers (PECASE) in 2000 and 2005.