Over the last few days, a story has been making the rounds about a whale who learned how to “talk.” The caps-lock-named NOC is a beluga whale, captured in 1977 and subsequently conscripted into the Navy’s Marine Mammal Program.
Digital history spans disciplines and can take many forms. Computer technology started to revolutionize the study of history more than three decades ago, and yet genres and formats for recording and presenting history using digital media are not well established and we are only now starting to see large-scale benefits. New modes of publication, new methods for doing research, and new channels of communication are making historical research richer, more relevant, and globally accessible. Many applications of computer-based research and publication are natural extensions of the established techniques for researching and writing history. Others are consciously experimental. This chapter discusses the latest advances in the digital history field and explores how new media technologies are reconfiguring the study of the past.
The new exhibit up at MoMA, "Talk To Me: Design and the Communication Between People and Objects," feels like the multimedia, interactive redaction of my book, Now You See It: How the Brain Science of Attention Can Transform the Way We Live, Work, and Learn. Or, put another way, if I were creative and clever enough to be a media artist, this exhibit--in its totality--covers just about all the points I am trying to cover in Now You See It. MoMA Director Glenn Lowry, in his introduction to the exhibit catalog, calls it "a snapshot in time, recording the diversity and open-endedness of contemporary design." I think he's right, but there's something more here: there is also a process of translation, from analog to digital, including a lot of meta-discourse on what it means to have already made a transition we are just starting to understand.
It's been a while since I posted, and it seems I've been knee-deep in Google! Yes, OK, I use it for my email and docs, recently Maps (I've been improving the functionality of the creative maps, awesome btw, post to come) and YouTube. So without further ado... is that how you type that expression?... anyway!
Google is also integrated into my teaching, from sharing resources digitally on Google Docs to students submitting visual research through Blogger. Today a colleague suspected you could use a Google (word) doc as a live wiki. I was intrigued, so we got the students to input their email addresses (hence it's not great quality to read), as we'd not yet seen how it works.
As you can see from the video, it flickers with different colour tags, which are the different students collaborating on a timeline of research into how we got to our digital world of today... from when Zuckerberg invented Facebook back to when Logie Baird invented the telly (OK, the majority are now plasmas as opposed to cathode ray tubes).
But watching it live (the college's servers and ageing laptops just about survived) created a beautiful collaborative document and such a fast, vast resource of info (OK, I didn't ask them to double-check their sources of data, much in the same respect as a wiki) that it sparked enough debate that the tutor could simply reflect on it to point out the shortening of technological development cycles (there is a better description for it), key dates, etc.
Is there a way to track the amount of individual user input into the document, other than colour-coding each user's text? I can imagine a tree diagram to represent the contrasting proportions of each user's input. I mean, it's useful in that you can track which user is doing what and who is adding the most useful data, but can we data-mine their input?
Anyhow, I can imagine this technique has been done plenty already and we're hardly new, but I had to record it as their collaboration looked so good, and I'd love any feedback on whether it is possible to translate this input technique into stats that can be made into infographics (not for the 'eye candy' novelty, though it's intriguing to innovate, but to assess and evaluate the students' learning). Give Curriculum Leaders/Verifiers a clean-sweep perception to aid the arbitrary quantification of the not-easily-quantifiable currency of creative understanding. OK, maybe not as deep as the whole of creativity, but still.
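As a rough illustration of the kind of stats this could produce: assuming you could export the document's revision history as (user, inserted text) pairs — the edit_log below is made-up sample data, not a real Google Docs export — a few lines of Python would already give you per-student contribution figures to feed an infographic.

```python
# Sketch: per-user contribution stats from a hypothetical edit log.
# edit_log is invented example data standing in for an exported
# revision history of (user, inserted_text) pairs.
from collections import Counter

edit_log = [
    ("alice", "John Logie Baird demonstrates television, 1926. "),
    ("bob", "Facebook launches, 2004."),
    ("alice", "ARPANET sends its first message, 1969. "),
]

# Tally the number of characters each user contributed.
chars_per_user = Counter()
for user, text in edit_log:
    chars_per_user[user] += len(text)

# Print each user's share of the document, largest first.
total = sum(chars_per_user.values())
for user, chars in chars_per_user.most_common():
    print(f"{user}: {chars} chars ({100 * chars / total:.0f}% of the document)")
```

The same tallies could just as easily count edits per topic or per time slot, which is where the tree diagram idea would come in.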
Below, is the current state of our proposed Your Name Here Program. The program itself was up on the Comment Press site for a year garnering excellent feedback . . . so now, dear HASTAC friends, we need your smarts to crowdsource a new name. What do you think? Leave your ideas in the comments section, or tweet it, or Facebook it, or send us an email!
Today’s post contains a Vimeo video from the Long Island Index, which gathered data on the Long Island region. The presentation was designed to create a sense of urgency and a call to action. Consider the data presented and compare it to your last presentation. This is a great example of effective communication of what could otherwise be extremely dry data. The video is a little over 4 minutes – well worth the time.
Gallery of Video Stills
Can you picture it? Well, Google is probably well on its way to developing it, but I want to share more doodles and ideas on this blog.
Wouldn't it be brilliant to use this as an app on your phone, or to automatically detect a language from a sender and then automatically translate it into the language the receiver understands? It can't be far from development.
There is a voice-to-text search app from Google on my Android HTC, and I'm sure there is a text-to-voice one I hear students playing with on the Mac. I can appreciate it probably takes a lot of servers to manage, with the global population wanting to converse and communicate in their own language with businesspeople abroad.
If you take the Shannon and Weaver communication model diagram of 1949, the noise in the middle would be the server translating and detecting idioms (a UK example would be "dog and bone", or, up north, "put wood in th'oil") and dialects.
English voice to English text - server translates the text (like the toolbar at this site) - Japanese text to Japanese voice
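The pipeline above can be sketched in code. The three stages below are stand-ins for real services (speech recognition, something like Google Translate, and speech synthesis); the tiny phrase table is invented, and the point is the flow of data through the stages, not the linguistics.

```python
# Minimal sketch of the voice -> text -> translate -> text -> voice pipeline.
# All three stages are stubs: a real system would call speech and
# translation services here. PHRASE_TABLE is made-up illustration data.

PHRASE_TABLE = {"hello": "こんにちは"}

def speech_to_text(audio):
    # Stand-in: pretend the audio has already been recognised.
    return audio["transcript"]

def translate(text, source="en", target="ja"):
    # Stand-in for a translation service; this stage is also where the
    # idiom/dialect handling (the "noise" in the Shannon-Weaver sense)
    # would have to live. Unknown phrases pass through unchanged.
    return PHRASE_TABLE.get(text.lower(), text)

def text_to_speech(text):
    # Stand-in: a real system would synthesise audio here.
    return {"spoken": text}

def translating_telephone(audio):
    text = speech_to_text(audio)       # English voice -> English text
    translated = translate(text)       # server translates the text
    return text_to_speech(translated)  # Japanese text -> Japanese voice

result = translating_telephone({"transcript": "Hello"})
print(result["spoken"])  # こんにちは
```

Chaining the stages like this also makes the weak link obvious: everything downstream can only be as good as the middle translation step.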
I admit the text-to-voice converter might be limited in translating the tone of the message, given the intonation and inflection that come from the rubato of the spoken word. Maybe in time it can measure the pace, the rise in volume, the length of pauses, irony; but for now, the nearest we can get to word-for-word meaning would be excellent.
- Free Translator ~ text translate with voice - GAOHUIJUAN (itunes.apple.com)
- The Slow Race To A Translating Telephone (techcrunch.com)
- Android Apps for Translating Speech to Text (brighthub.com)
- 3 of the Best Tools to Translate Using Google Translate (makeuseof.com)
- Google Translate's Conversation Mode (googlesystem.blogspot.com)
The following short video is a visualization of the network packets of a YouTube video, slowed down 12 times.
Each flying circle represents a network packet. The small green ones are control packets: ACK, SYN, etc. The larger blue ones are data packets. The data is from a real tcpdump of the first 4 seconds of Rick Astley’s music video.
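The control-versus-data split described above is easy to reproduce from tcpdump's text output. The sample lines below are invented (not from the actual capture), but the rule is the one the visualization uses: packets carrying no payload, only flags like SYN or ACK, are "control", and the rest are "data".

```python
# Sketch: classify tcpdump output lines as control vs data packets.
# The sample capture lines are made-up examples in tcpdump's text format.
import re

sample = [
    "12:00:00.000001 IP client.54321 > youtube.443: Flags [S], length 0",
    "12:00:00.000514 IP youtube.443 > client.54321: Flags [S.], length 0",
    "12:00:00.001002 IP client.54321 > youtube.443: Flags [.], length 0",
    "12:00:00.002101 IP youtube.443 > client.54321: Flags [.], length 1448",
]

def classify(line):
    # Zero-length packets (pure SYN/ACK handshake traffic) are control;
    # anything with a payload is data.
    m = re.search(r"length (\d+)", line)
    size = int(m.group(1)) if m else 0
    return "data" if size > 0 else "control"

counts = {"control": 0, "data": 0}
for line in sample:
    counts[classify(line)] += 1
print(counts)  # {'control': 3, 'data': 1}
```

In a real 4-second capture of a video stream the data packets would vastly outnumber the control packets, which is what makes the visualization's small green circles stand out.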