Bots of Conviction for Archaeologists and Historians

 Uncategorized
Apr 23, 2017
 

I was re-reading Mark Sample’s call for bots of conviction, for protest bots, for bots so topical and on-point that they can’t be mistaken for bullshit. Per Sample, such bots should be

  • topical – “They are about the morning news — and the daily horrors that fail to make it into the news.”
  • data-based – “They draw from research, statistics, spreadsheets, databases. Bots have no subconscious, so any imagery they use should be taken literally”
  • cumulative – “The repetition builds on itself, the bot relentlessly riffing on its theme, unyielding and overwhelming, a pile-up of wreckage on our screens.”
  • oppositional – “protest bots take a stand. Society being what it is, this stance will likely be unpopular, perhaps even unnerving”
  • uncanny – “The appearance of that which we had sought to keep hidden.”

The only bot I know of by a historian that meets these criteria is Caleb McDaniel’s Every3Minutes. Lord knows my drawer full of bots does not meet any of those criteria, save perhaps for the ‘uncanny’ in Sample’s sense for @tinyarchae, in that it takes the awful social dynamics present on many (most?) archaeological fieldwork projects and pushes the needle all the way past 11.

But my bots are not bots of conviction. They are not very good, truth be told. So I wondered aloud on Twitter yesterday: what would make for good archaeological or historical bots? I thought maybe

  • a bot tweeting datasets lost to ideological cleansing?
  • a bot pointing out the unprovenanced antiquities in museums?

Others chimed in. I’ve gathered their suggestions here in case anyone was looking for inspiration.
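To make the ‘data-based’ idea concrete, here is a minimal sketch of what such a bot might look like in Node.js. Everything in it is a placeholder rather than a working bot: it assumes a hypothetical tab-separated file, lost-datasets.tsv, with one destroyed or suppressed dataset per line, credentials supplied through environment variables, and the twit npm package for posting.

// A sketch of a simple "bot of conviction": tweet one lost dataset at a time,
// drawn literally from the data, in the spirit of Sample's criteria.
const fs = require('fs');
const Twit = require('twit'); // npm install twit

const T = new Twit({
    consumer_key: process.env.CONSUMER_KEY,
    consumer_secret: process.env.CONSUMER_SECRET,
    access_token: process.env.ACCESS_TOKEN,
    access_token_secret: process.env.ACCESS_TOKEN_SECRET
});

// Each line of the (hypothetical) file: <dataset name>\t<year lost>\t<how it was lost>
const records = fs.readFileSync('lost-datasets.tsv', 'utf8')
    .split('\n')
    .filter((line) => line.trim().length > 0)
    .map((line) => line.split('\t'));

// Pick one record at random and state it plainly.
const [name, year, how] = records[Math.floor(Math.random() * records.length)];
const status = `${name} (${year}): ${how}. This dataset is no longer available.`;

T.post('statuses/update', { status }, (err) => {
    if (err) console.error('Could not post:', err);
});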

So… maybe you might come and make simple bots with me in May, and we can revisit this question.


5 spectra for speculative knowledge design

 Libraries, unfiltered
Apr 23, 2017
 

[Last weekend, I had the good luck to join an inspiring, interdisciplinary Ecotopian Toolkit gathering hosted by Penn’s Program in Environmental Humanities. (How lucky was I? We also got a sneak peek at the Pig Iron Theatre Company’s stunning symphonic meditation on the Anthropocene, A Period of Animate Existence, which will premiere in Philadelphia later this year.) What follows is a short talk I gave on the last day of the conference. The beginning of it is stuff you may have heard from me before. An augmented, footnoted, slightly more sober version is bound for an edited collection by Martin Eve and Jonathon Gray, on the “past, present, and future of open access.”]

This is a talk about how we might realize our hopes for digital libraries, archives, and museums as socially just and hopeful (maybe even “Ecotopian”) knowledge infrastructure. Three threads from Afrofuturism are woven through it all. They take the form of a question and a set of twinned assertions. The geopolitical and environmental inflection-points that have been the subject of this conference demand that we answer the question in the affirmative, and that we actively encode the assertions—these two key Afrofuturist assertions I’ll share—into the very weft and weave of our digital libraries: from the deep structures in which we store, deliver, protect, and preserve cultural and scientific data; to the ontologies and metadata systems through which we produce information and organize, rationalize, and make it interoperable; to those surface platforms and interfaces for discovery, contemplation, analysis, and storytelling that must be forevermore inextricably algorithmic and humane. What do I mean, humane? I mean predicated on decisions, understandings, and ethical, empathetic engagement with communities understood both locally and (as they say) “at scale.”

So first you’ll get the question from me, and then the assertions. And it’ll be in their light that I want to present five spectra along which I think digital cultural heritage and open science platform-designers must more self-consciously work, if we mean to do our part in the project that has brought us together this week—that is, if we want to contribute basic knowledge infrastructure for toolkits to meet present challenges and far-future, global and interpersonal responsibilities. 

Mark Dery, then styled a “cyberculture” critic, both coined the term Afrofuturism in 1994 and posed the question that remains at its heart—at the heart of the speculative art, music, fiction, poetry, fashion, and design that meet in this rich and longstanding nexus of Black diasporic aesthetics and inquiry. The question is this: “Can a community whose past has been deliberately rubbed out, and whose energies have subsequently been consumed by the search for legible traces of its history, imagine possible futures?” Afrofuturism’s answer has been an unequivocal yes, and that clarity inspires me, particularly in our fraught American context. But, as we know, descendants of the horrors of the transatlantic slave trade are only one of many communities marginalized by archival absence and subject to… well, “library problems”—problems of misrepresentation, thwarted agency, and structural neglect.

My professional community includes stewards of primary sources, research data, and scholarship—and designers of cultural heritage systems meant to serve the broadest cause of social justice and the public good. Our responsibility is therefore twofold: not merely to address that first, daunting task—the provision of “legible traces” of the past through more broadly accessible special collections, archives, and archaeological, environmental, and aggregated genetic datasets. We also need to enable the independent production, by our varied and often marginalized constituencies, of community-driven, future-oriented speculative collections. This means visions for change and social uplift that originate in archival material, yes, but also the introduction of novel ontologies and epistemologies for those libraries and archives: inventive assemblages, recovered cultural structures, and new knowledge representation. Can—for instance—digital knowledge infrastructures challenge Western, progressive notions of time as a forward-moving arrow and a regularly-ticking clock? Can they counter the limiting sense our library and museum interfaces too often give, of archives as incontrovertible evidence—the suggestion, reinforced by design, that the present state of human affairs is the inevitable and singularly logical result of the accumulated data of the past; that our repositories primarily look backward to flat facts, not forward to imaginative, generative, alternate futures or slantwise through branching, looping time? These questions build on the core problem Dery articulated, of whether speculative futures are even possible to generate from obliterated or co-opted pasts.

Now, the assertions. Two of them. The first comes from jazz saxophonist Shabaka Hutchings as a distillation of the message of musician and performer Sun Ra. As Hutchings puts it, “communities that have agency [are] able to form their own philosophical structures”—in other words, they don’t just receive and use information within epistemological bounds defined by those in authority (scholars and teachers, legislators and corporate overlords, librarians and technologists), but instead actively shape knowledge in ways reflected in the very design of storage and delivery mechanisms over which marginalized people typically have little control. This is the deceptively simple idea that the fundamental marker of liberty lies in a people’s ability to build independent knowledge infrastructure. (And in truth, this idea motivates everything I do at the Digital Library Federation, lately.)

The second assertion comes from theorist and artist Kodwo Eshun. Eshun conceives of historical, archival and archaeological sources—including intangible kinds of cultural heritage, such as language and song—as functional and generative, not as static content, there merely to be received, but as active technologies in and of themselves. (This is found all through Eshun’s work, and beautifully demonstrated in a documentary I highly recommend, John Akomfrah’s The Last Angel of History.) To Eshun, the objects of cultural heritage are still-running code and tools that hum with potential. Our historical repositories and even the vaults of the archaeological earth contain active instruments—artifacts waiting to be used, and transformed even as they are played back—just as surely as a scratch artist makes productive dissonance from records on a turntable. So, not for playback and reception—for activating. For use.

Okay. How might Eshun’s technological reframing of that longstanding historiographical concept of a “usable past” combine both with Hutchings’ location of liberation and community agency in the capacity not merely to access information but to create independent philosophical infrastructure, and with Dery’s summation of the speculative, alternate-future goals of Afrofuturism—to become informing principles for the next generation of future-oriented and liberatory digital libraries, museums, and archives?

I dunno. What I do know is that we need more design experimentation to figure that out, and that we might run these experiments along certain fruitful axes or spectra. So, here’s my non-exclusive list. In no case are the ends of any spectrum I will present self-cancelling notions; we may usefully imagine malleable, overlapping systems and oblique slices. The goal of an exercise in digital library design run along these spectra would simply be increased awareness of their relevance to the concerns I started with, and their impact on individuals and communities and ecologies: the possibilities they welcome or foreclose; the dangers they ward against or fail to see; their fundamental generosities and what they hold back.

Enlightenment vs. Afrofuturist Structurings. Library organizational schemes are still largely Enlightenment-era crystallizations of a singular, dominant understanding: the best that a rational society accepts and knows. It is no accident that we appeal to “authority files” in creating metadata and often present information in stemmatic, patrilineal relationships of “inherited properties.” We create it through the little boxes of tabular forms. But new possibilities bring us closer to actualized community agency in digital knowledge infrastructure—alternate naming and finding schemes, practical models for intersectional logic systems, linked open data that melds multiple taxonomies and inheritances—an extension of the content-creating revolution of the Web to meaning-making. This is a fundamental liberty that would reach its fullest expression in grassroots, independent and interdependent, broadly accessible, machine-readable philosophical framings—interoperable knowledge infrastructure beholden to no-one. We might invest in such a thing.

However—in an era of derogated scientific and scholarly expertise, climate data denial, rising white supremacy, Breitbart and InfoWars—isn’t it also our responsibility to construct libraries that reflect and prop up structures for truth-seeking that the academy has spent so long evolving and optimizing—namely the forms and methods of our (admittedly problematic) sciences and disciplines? So, what’s the place of the resistant or subaltern premise in digital library design? How do we honor and elevate indigenous knowledge structurally, without simultaneously providing a platform that can be instantly colonized for political disinformation and ideologies of hate?

Historico-evidentiary vs. Speculative Orientation. I also want design experiments that address the basic temporal and evidentiary alignment of our libraries. Present interfaces too often suggest a singular, retrospective or historical orientation toward the material they give access to, and fail to allow community-driven and multiple, speculative, futurist visions to emerge from our collections. So let’s ask: do our digital libraries present their contents as flat fact, or as hypotheses and fodder for interpretation? Do they allow us to look backwards and ahead? Do they adequately indicate gaps and absences and the conditions of their own assemblage, or do they present (as I described before) archives as evidence?

To answer these questions in the form of prototype designs requires us to delve beyond the interface layer in digital knowledge infrastructure, and into the fundamental nature of our archives. Wendy Duff and Verne Harris, in seeking a new basis for archival description, argue against positioning “archives and records within the numbing strictures of record keeping… which posit ‘the record’ as cocooned in a time-bound layering of meaning, and reduce description to the work of capturing and polishing the cocoon.” Instead, they call for “a liberatory [descriptive] standard… posit[ing] the record as always in the process of being made, the record opening out of the future. Such a standard would not seek to affirm the keeping of something already made… [but rather] open-ended making and re-making.”

In considering the orientation of our libraries toward digital objects as evidence, we should also heed Anne Gilliland and Michelle Caswell’s call for increased attention to the “archival imaginary:” those absent (perhaps missing, destroyed, merely theorized or wished-for) documents that traverse aporia and offer “counterbalances and sometimes resistance to dominant legal, bureaucratic, historical and forensic notions of evidence that… fall short in explaining the capacity of records and archives” to move us. Designing for such imaginaries would counter “strands of archival theory and practice [that] maintain an un-reflexive preoccupation with the actual, the instantiated, the accessible and the deployable—that is, with records that have… evidentiary capacity.” How might such “differing imagined trajectories of the future” emerge from records both present in and absent from the past?

Assessment vs. the Incommensurate. Concerns about “archives as evidence” lead us to the hyper-measured condition of the contemporary library. How could things be otherwise? Our digital knowledge platforms are made up of counting machines situated in the neoliberal academy. And indeed, thoughtfully designed and well-supported metrics can help us to refine those systems and suit them better to the people who must inhabit them. Their development is also a necessary, pragmatic response to straitened circumstances. In the face of information abundance, increasing service demands, and limited financial and staffing capacity, assessment measures are instruments through which open access advocates and cultural heritage professionals can make the case for resources and show where they are wisely applied.

Measurement is not going away. The challenge for systems and interface designers is to enable humane and ethical quantification of behaviors and of objects that are by nature deeply ambiguous and even ineffable. These include (of course) users’ complex interactions with information and each other in digital spaces. But we’re also talking about the instantiated cultural data itself: digitized and born-digital objects—records continually remediated as they are delivered or displayed—fundamentally fungible, organic, fluid, and incommensurate, one with another.

Transparency vs. Surveillance. Patron records have long been among the most closely-guarded and assiduously expunged datasets librarians hold. Responsible 21st century digital knowledge design must keep privacy concerns paramount. This is because technologies of sharing and of surveillance are a single, Janus-faced beast. It is up to us to create and fiercely guard mechanisms that protect users’ rights to read, explore, and assemble information unobserved. Our designs must also respect individual and community agency in determining whether historical or contemporary cultural records should be open to access and display in the first place—ideally fostering and encouraging local intellectual control. But here, again,  the contradictory challenge is to build infrastructure that can shield while also opening up. We need our digital library platforms to contribute to watchdog and sunlight initiatives promoting transparency, accountability, and openness in government and corporate archives—while simultaneously upholding cultural and individual rights to privacy and local control.

Local vs. Global Granularities. I see the fundamental paradox of the Anthropocene as our struggle to hold local unpredictability and planetary-scale inevitability simultaneously in mind. Add to that the fact that, somehow, we now must understand humankind as both infinitesimally small and fragile, and as a grim, global prime mover. Can our digital library systems help us to bridge those conceptual gaps? They must, if we want to fashion futures that use both science and empathetic understanding to their fullest extent, integrating big-data processing with small-data interpretation—understanding broad, systemic thinking and local application as part of a unified endeavor, and helping us identify trends even as we tell stories of exceptional experience.

These have been a quick and dirty five among many possible vectors for design thinking that might open 21st century digital knowledge infrastructure to broader community ownership, richer scholarly application and space-time sensitivity, and more creative, speculative ends. I picked these by starting at a place of great respect for Afrofuturist thinking, but other theoretical frameworks and ways of knowing might take us elsewhere. (Many of those have been usefully articulated at the Ecotopian Toolkit conference this week.) May they loop our libraries backward into stories not yet told and forward to every better future we can build.

How our ancient trees connect us to the past

 manuscripts, Medieval
Apr 22, 2017
 
Some of the most stunning creations of the Middle Ages are still alive. Britain is dotted with trees planted hundreds of years ago, with over 120,000 listed in the Woodland Trust’s Ancient Tree Inventory. Some of them are over a thousand years old. This year, organisations across the United Kingdom...

Solar System in a bottle

 Data Art, space
Apr 22, 2017
 

From Little Planet Factory, a Solar System in a bottle made to scale:

A small bottle attempting to maintain the correct scale between the 8 planets of the solar system at a scale of 1:5,000,000,000. Much as in reality the entire bottle is almost entirely dominated by the volume (and mass) of the four gas giants while the four solid planets settle almost dust like in comparison at the bottom of it.

Cute. [via @alykat]
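For a rough sense of what 1:5,000,000,000 means, here’s a quick back-of-the-envelope calculation (the diameters are common approximate values, not the Little Planet Factory’s specs): at that ratio, 5,000 km shrinks to a single millimetre, so Jupiter comes out at roughly 28 mm across while Earth is about 2.5 mm.

function scaledDiameterMM(diameterKm, scale = 5e9) {
    // 1 km = 1e6 mm; dividing by the scale factor gives the model size.
    return (diameterKm * 1e6) / scale;
}

// Approximate equatorial diameters, in km.
const planets = {
    Mercury: 4879, Venus: 12104, Earth: 12742, Mars: 6779,
    Jupiter: 139820, Saturn: 116460, Uranus: 50724, Neptune: 49244
};

for (const [name, km] of Object.entries(planets))
    console.log(`${name}: ${scaledDiameterMM(km).toFixed(1)} mm`);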


DPI 2017: Digital Pedagogy Institute 4th Annual Conference

 Uncategorized
Apr 22, 2017
 
August 16 2017 to August 17 2017
Call for Papers
The 4th Annual Digital Pedagogy Institute conference will be held this August at Brock University in the beautiful Niagara Peninsula.
Dates: Wednesday August 16 – Thursday August 17, 2017
Location: Brock University (St. Catharines, Ontario...
1812 Sir Isaac Brock Way
St Catharines, ON L2S3A1
Canada

Data distributed as clipart

 Government
Apr 21, 2017
 

Government data isn’t always the easiest to use with computers. Maybe it’s in PDF format. Maybe you have to go through a roundabout interface. Maybe you have to manually request files through an email address that may or may not work. However, this file that OpenElections received might take the cake.

It’s a spreadsheet, but the numbers are clipart.

Did someone enter clipart manually? Why is it clipart instead of numbers in Excel? Who made this file? So many questions, so little data.


Digitised Manuscripts hyperlinks Spring 2017

 early modern, Illuminated manuscripts, manuscripts, Medieval
Apr 21, 2017
 
From ancient papyri to a manuscript given by the future Queen Elizabeth I to King Henry VIII for New Year's Day, from books written entirely in gold ink to Leonardo da Vinci's notebook, there is a wealth of material on the British Library's Digitised Manuscripts site. At the time of...

California looks green again

 Uncategorized
Apr 21, 2017
 

In case you didn’t hear, California had a bit of a drought problem for the past few years. We complained about not enough rain constantly, and we finally got a lot of it this year. Now we complain that there’s too much rain (because you know, we have to restore balance). On the upside, the state looks a lot greener and less barren these days. David Yanofsky for Quartz has got your satellite imagery right here.


A Few Words on Fetching Bytes

 standards
Apr 21, 2017
 

Like all good puzzles, a web browser is composed of many different pieces. Some are all shiny, like your favorite web API. Some are less visible, like HTML parsing and web resource loading.

Even dull pieces require lots of work to standardize their behavior across browsers. For example, HTML parsing originally provided only: give me HTML and I’ll give you a document. Now it is much more reliable across browsers because it has been standardized in detail. Similarly, the loading of web resources was only loosely consistent, roughly: give me an HTTP request and I’ll get you an HTTP response. But loading a web resource encompasses much more than that. The Fetch specification thoroughly standardizes those details. As well as specifying how the browser loads resources, the Fetch specification also defines a JavaScript API for loading resources. This API, the Fetch API, is a replacement for XMLHttpRequest, providing as low-level a set of options as possible in the context of a web page. Let’s see how shiny the Fetch API might be.

The Fetch API

The Fetch API consists of a single Promise-returning method called fetch. The returned promise resolves to a Response object, which exposes the response headers and body. Let’s use the Fetch API to retrieve the list of WebKit features:

async function isFetchAPIFeelingGood() {
    let webkitFeaturesURL = "https://svn.webkit.org/repository/webkit/trunk/Source/WebCore/features.json";
    let response = await fetch(webkitFeaturesURL);
    let features = await response.json();
    return features.specification.find((feature) =>
        feature.name == "Fetch API");
}
isFetchAPIFeelingGood().then((value) => alert(!!value ? "Oh yes!" : "not really!"))

You might notice two await uses in the example above. fetch returns a promise that resolves once the response headers are received. The data being requested is JSON, so response.json() returns a second promise, which resolves once the entire response body has been received and parsed.

fetch can take either a URL or a Request object. The Request object allows access to a whole new set of options compared to XMLHttpRequest. Let’s try again to check whether the Fetch API is supported in WebKit, but this time, let’s make sure our cache does not serve us some out-of-date information.

async function isFetchAPIFeelingGoodForReal() {
    let webkitFeaturesURL = "https://svn.webkit.org/repository/webkit/trunk/Source/WebCore/features.json";
    let response = await fetch(new Request(webkitFeaturesURL,
        { cache: "no-cache" }
    ));
    let latestFeatures = await response.json();
    return latestFeatures.specification.find((feature) =>
        feature.name == "Fetch API");
}

fetch also provides more flexible access to the response body. In addition to getting it in various flavors (JSON, arrayBuffer, blob, text…), the response exposes a ReadableStream body attribute. This makes it possible to process chunks of bytes progressively as they arrive, without buffering the whole response, and even to abort the resource load partway through (a sketch of that follows the example below):

async function featureListAsAReader() {
    let webkitFeaturesURL = "https://svn.webkit.org/repository/webkit/trunk/Source/WebCore/features.json";
    let response = await fetch(new Request(webkitFeaturesURL));
    return response.body.getReader();
}

// Scan a chunk of bytes for the ASCII string `searched`, carrying `count`
// (the number of characters matched so far) across chunk boundaries.
function checkChunk(searched, buffer, count)
{
    var i = 0;
    while (i < buffer.length) {
        if (buffer[i++] == searched.charCodeAt(count)) {
            if (++count == searched.length)
               return count;
        } else if (count) {
            --i;
            count = 0;
        }
    }
    return count;
}

// Read chunks recursively until "Fetch API" is found or the body is exhausted.
async function isFetchAPIFeelingGoodWhileChunky(reader, count)
{
    reader = reader ? reader : await featureListAsAReader();
    count = count ? count : 0;

    let chunk = await reader.read();
    if (chunk.done)
        return false;

    let searched = "Fetch API";
    count = checkChunk(searched, chunk.value, count);
    if (count == searched.length)
        return true;
    return isFetchAPIFeelingGoodWhileChunky(reader, count);
}
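As a companion to the streaming example above, here is a minimal sketch of aborting a load partway through: cancelling the reader tells the stream that no more data is wanted, so the browser can stop downloading the rest. stopReadingOnceFound is a hypothetical helper, reusing featureListAsAReader and checkChunk from above:

async function stopReadingOnceFound(searched)
{
    let reader = await featureListAsAReader();
    let count = 0;
    while (true) {
        let chunk = await reader.read();
        if (chunk.done)
            return false;
        count = checkChunk(searched, chunk.value, count);
        if (count == searched.length) {
            // Cancel the reader to abort the remainder of the resource load.
            reader.cancel();
            return true;
        }
    }
}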

Fetching The Future

The Fetch API journey is not finished. New proposals might cover important features of XMLHttpRequest that Fetch currently lacks, like early cancellation and timeouts. New proposals might also cover HTTP/2 push and priorities, as well as wider use of the Response object in web APIs: media elements, WebAssembly… The Fetch algorithm is also being constantly refined to reach full interoperability of web resource loading. A first iteration of the WebKit Fetch API implementation has shipped in Safari. The WebKit community is eager to hear your feedback on this feature. Comments, suggestions, priorities, use cases, tests, bug reports and candies are all very welcome through the usual WebKit channels. That would be so fetch indeed!
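Until such proposals land, one common workaround for the missing timeout is to race the fetch against a timer with Promise.race. Here is a rough sketch, with fetchWithTimeout as a hypothetical helper and the 5000 ms value chosen arbitrarily; note that the losing fetch is merely ignored, not actually cancelled on the network:

function fetchWithTimeout(url, milliseconds) {
    // Reject if the timer wins the race; otherwise resolve with the response.
    // The underlying request keeps going; only our interest in it times out.
    let timer = new Promise((resolve, reject) =>
        setTimeout(() => reject(new Error("fetch timed out")), milliseconds));
    return Promise.race([fetch(url), timer]);
}

fetchWithTimeout("https://svn.webkit.org/repository/webkit/trunk/Source/WebCore/features.json", 5000)
    .then((response) => response.json())
    .then((features) => console.log(features.specification.length + " specifications listed"))
    .catch((error) => console.log(error.message));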
