I had the pleasure of spending a week with the folks in the Architecture School learning and playing with their Kuka robot (named Karl, http://www.robotsinarchitecture.org/). This was the first run of a hopefully recurring camp to introduce faculty and staff to the robot arm in the Fab-lab in the A-school. Most of the participants were A-school faculty, but the group also included me from the Library, someone from the Music school’s maker space, and someone from the Bio-engineering school (they had just purchased a Kuka KL 3000, a much bigger bot).
Over four days we learned how to make the robot do marvelous and wonderful tricks.
Day 1: Push some clay around.
The object of this day was to learn the basics of the software that runs the Kuka robot.
We picked a tool that we would have the robot push into the clay at a certain number of intervals. The idea was to pour plaster onto the clay to create a permanent design:
This was the most basic tool, just an extension of the arm itself, with different shapes on the end.
Day 2: Cut a foam block
This day we used the robot to cut a foam block with a hot wire. We drew a line in the software that would correspond to a long vertical cut in the block. What we didn’t know at the beginning was that after the first cut the block is rotated 90 degrees, and the cut is made again.
This produced four long pieces that could be rotated to form a column with a unique profile on each side.
I wrote my name with the line, which produced this:
The focus of this day was to help us think spatially. We could add twists to the curves, which led to columns whose final shape was hard to visualize mentally. Here is a sample of the columns our group generated:
Day 3: Light writing
This day we used a slightly more sophisticated tool end: an LED light that could switch colors. We designed a shape and assigned a color to each line, curve, or segment.
I didn’t get very creative on this one, but others had great ideas:
Day 4: 3D extruding
This day was more demonstrative than hands-on. We went through the basics of how to get the robot to use a 3D pen to print filament in 3D space. Rather than limiting printing to a series of layers stacked on top of each other, the printing can happen in true 3D space.
Finally, we sat together and discussed how robotics can and should impact our fields, and how the robo-camp went in general.
The discussion was light, basically just touching on the following topics or posing these questions. There was no deep discussion or attempt at answers, just a quick question- and thought-dump session.
- What can robots do that humans can’t, besides working faster and more accurately?
- What can robots do for humanities research? How can they improve and help data visualization?
- What are the components necessary for the robot to function? The robot has motion, but it needs sensory input, data, in order to act on it.
- One of the former students presented an example from their year of working with the robot and noted a frustration that perhaps libraries can and should help with. This group wanted to use the robot to print cement in true 3D (not like traditional 3D printers, which basically print 2D layers and then stack layer upon layer). To accomplish this, they had to design and build their own tool. The idea was to have a nozzle that changed shape while extruding the cement, thereby adding another layer of control and design to the printed object. The group was frustrated by the many hours spent figuring out how to get motors to operate, when there are people out there (and probably students on campus) who know how to do this in their sleep. They spent time developing the tool instead of refining the product. Sometimes creating the tool is a beneficial and important step in creating a product; other times it detracts from the end goal. Libraries could and should be a great resource for connecting two groups that can benefit from each other’s expertise. The Scholars’ Lab can and should be such a hub of networking and connection.
All in all it was a great experience: mind-opening and enlightening.
MathML is a W3C recommendation and an ISO/IEC standard that can be used to write mathematical content on HTML5 web pages, EPUB3 e-books and many other XML-based formats. Although it has been supported by WebKit for a long time, the rendering quality of mathematical formulas was not as high as one would expect and several important MathML features were missing. However, Igalia has recently contributed to WebKit’s MathML implementation and big improvements are already available in Safari Technology Preview 9. We give an overview of these recent changes as well as screenshots of beautiful mathematical formulas rendered by the latest Safari Technology Preview. The MathML demos from which the screenshots were taken are also provided and you are warmly invited to try them yourself.
We continue to rely on Open Font Format features to improve the quality of the mathematical rendering. The default user agent stylesheet will try to find known mathematical fonts, but you can always style the
<math> element to pick your favorite font. In the following screenshot, the first formula is rendered with Latin Modern Math while the second one is rendered with Libertinus Math. The last formula is rendered with an obsolete version of the STIX fonts, which lacks many Open Font Format features; for example, one can see that integral and summation symbols are too small or that their scripts are misplaced. Scheduled for the third quarter of 2016, STIX 2 will hopefully fix many of these issues and hence become usable for WebKit’s MathML rendering.
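As a minimal sketch, a page can opt into a specific math font with an ordinary stylesheet rule (this assumes Libertinus Math is installed on the reader's system; any installed OpenType math font would do):

```html
<style>
  /* Ask the engine to use a specific OpenType math font for all formulas.
     If the font is unavailable, the user agent falls back to its own
     known-math-font lookup. */
  math { font-family: "Libertinus Math"; }
</style>

<math>
  <mfrac><mi>a</mi><mi>b</mi></mfrac>
</math>
```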
WebKit now supports the href attribute which can be used to set hyperlinks on any part of a formula. This is useful to provide references (e.g. notations or theorems) as shown in the following example.
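A short sketch of the attribute in use; the `#definition-of-pi` anchor is a hypothetical target, but per MathML 3 the `href` attribute may be placed on any element, from a single token to a whole subexpression:

```html
<math>
  <mrow>
    <!-- Clicking the symbol navigates to the (hypothetical) anchor
         where the notation is defined. -->
    <mi href="#definition-of-pi">&#x3C0;</mi>
    <mo>&#x2248;</mo>
    <mn>3.14159</mn>
  </mrow>
</math>
```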
Unicode contains Mathematical Alphanumeric Symbols to convey special meaning. WebKit now uses the italic characters from this Unicode block for mathematical variables and the mathvariant attribute can also be used to easily access these special characters. In the example below, you can see italic, fraktur, blackboard bold and script variables.
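A minimal sketch of the four styles mentioned, using standard `mathvariant` values from the MathML specification:

```html
<math>
  <mrow>
    <mi>x</mi>                             <!-- single-letter <mi>: italic by default -->
    <mo>,</mo>
    <mi mathvariant="fraktur">g</mi>       <!-- fraktur -->
    <mo>,</mo>
    <mi mathvariant="double-struck">R</mi> <!-- blackboard bold -->
    <mo>,</mo>
    <mi mathvariant="script">L</mi>        <!-- script -->
  </mrow>
</math>
```

In each case the renderer maps the ASCII letter to the corresponding character in the Mathematical Alphanumeric Symbols block.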
A big refactoring has been performed on the code handling stretchy operators, large symbols and radicals. As a consequence the rendering quality is now much better and many weird bugs have been fixed.
A mathematical formula can be integrated inside a paragraph of text (inline math in TeX terminology) or displayed in its own horizontally centered paragraph (display math in TeX terminology). In the latter case, the formula is in
displaystyle and has no restrictions on vertical spacing. In the former case, the layout of the mathematical formula is modified a bit to minimize this vertical spacing and to integrate better with the surrounding text. The
displaystyle property can also be set using the corresponding attribute or can change automatically in subformulas (e.g. in fractions or scripts). The screenshot below shows the layout difference according to whether the equation is in displaystyle or not. Note that the displaystyle property should also affect the font-size, but WebKit does not support the scriptlevel yet.
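As a sketch, the same summation in the two modes (per the MathML specification, `display="block"` implies displaystyle on the `<math>` element):

```html
<!-- Inline: compact layout; the limits of the sum are attached as scripts. -->
<p>The series <math>
  <munderover>
    <mo>&#x2211;</mo>
    <mrow><mi>n</mi><mo>=</mo><mn>1</mn></mrow>
    <mi>&#x221E;</mi>
  </munderover>
  <mfrac><mn>1</mn><msup><mi>n</mi><mn>2</mn></msup></mfrac>
</math> converges.</p>

<!-- Display: own centered paragraph; limits go above and below the sum. -->
<math display="block">
  <munderover>
    <mo>&#x2211;</mo>
    <mrow><mi>n</mi><mo>=</mo><mn>1</mn></mrow>
    <mi>&#x221E;</mi>
  </munderover>
  <mfrac><mn>1</mn><msup><mi>n</mi><mn>2</mn></msup></mfrac>
</math>
```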
OpenType MATH Parameters
- Use of the AxisHeight parameter to set the vertical position of fractions, tables and symmetric operators.
- Use of layout constants for radicals, scripts and fractions in order to improve math spacing and positioning.
- Use of the italic correction of large operator glyphs to set the position of subscripts.
The screenshots below illustrate some of these improvements. In the first one, the use of AxisHeight makes it possible to better align the fraction bars with the plus, minus and equal signs. In the second one, the use of layout constants for scripts, as well as the italic correction of the surface integral, improves the placement of its subscript.
WebKit already had support for right-to-left mathematical layout used to write Arabic mathematical notations. Although glyph-level mirroring is not supported yet, we added support for right-to-left radicals. This makes it possible to use basic arithmetic notation, as shown in the image that follows.
Great changes have happened in WebKit’s MathML support recently and Igalia’s web platform team is excited about seeing beautiful mathematical formulas in WebKit-based browsers and e-readers! If you have any comments, questions or feedback, you can contact Frédéric at firstname.lastname@example.org or Manuel at @regocas. Detailed bug reports are also welcome in WebKit’s bug tracker. We are still refining the implementation so stay tuned for more awesome news!
Speaking of virtual reality visualization, this Nasdaq roller coaster by Roger Kenny and Ana Asnes Becker for the Wall Street Journal is quite the ride. The underlying data is just the index’s price/earnings ratio over time, but you get to experience the climbs and dips as if you were to ride on top of the time series track.
Weeeeeee, bubble burst.
More of an experiment, this VR map, by the Google Trends Lab in collaboration with Pitch Interactive, shows what people asked about Brexit leading up to the vote. It’s basic data-wise, but you can see potential for more details and get a feel for how virtual reality data visualization might work.
And besides, I’ll accept any excuse these days to bust out the Google Cardboard. Even if it’s basic visually, it’s easy to see how this point of view might bring you closer to the data.
See also the details on what the makers learned from the experiment.
UCL Centre for Digital Humanities are pleased to announce that we now have capacity to offer Reflectance Transformation Imaging and Spectral Imaging services from our Multi-Modal Digitisation Suite research facility based in central London.
Reflectance Transformation Imaging (RTI), also known as Polynomial Texture Mapping (PTM), is a high-resolution, non-invasive and non-destructive imaging technique for documenting fine surface details. Unlike conventional photographs, images created using the RTI capture method can be virtually relit. The direction of the light source can be moved around in real time to give 3D appearance to surface shapes for systematic inspection of fine surface details.
Spectral Imaging is a high-resolution, non-invasive and non-destructive form of computational photography that can disclose features of an object that are invisible to the naked eye in natural light: it can enhance faded writing, reveal palimpsests and under-drawings, and aid in the identification of pigments, binders and other materials. Spectral imaging helps clarify and support research, scholarly and other goals. The UCL state-of-the-technology spectral imaging system can be applied to documents and manuscripts, polychrome artworks, and a range of archaeological and heritage objects.
The kinds of material we can handle that are suitable for specialist imaging include:
• Documents, manuscripts, maps
• Artworks and other painted objects
• Coins, medals, jewellery
• Other objects bearing fine details such as seals and impressed sealings, cuneiform tablets, as well as inscriptions, carvings, bas-reliefs
• Forensic evidence or any object/surface requiring detailed examination.
For further information, please see our UCLDH Advanced Imaging Consultants page at https://www.ucl.ac.uk/dh/