
Peacock Butterfly & More


Spring is here – time to post more pictures of beetles and butterflies! I’ve also linked two Google+ albums from March not yet mentioned on this weblog.

Peacock Butterfly & Goldsmith Beetle — Today I finally caught a beautiful peacock butterfly (downscaled sample below). While feeding honeybees are extremely placid and easy to shoot, butterflies tend to flutter around… well, like butterflies! I saw plenty last year but only got a few drab ones on camera.

The butterfly was so big (easily over 5 cm) that most of the pictures I kept were shot without my usual Marumi DHG Achromat +3 macro filter, just with fully extended 200 mm zoom. I did keep a few macro shots as well, so you can observe the relatively crushed depth of field. And I shot a few pictures of a stately goldsmith beetle (rose chafer) for good measure. These all use the Marumi +3 filter.

Straubing Zoo — From late March, a visit to a small nearby zoo with extremely demotivated animals. I kept the funniest pictures of inmates demonstratively looking away from visitors.

Tiny Ladybug — Hatched in early March, mostly black with four red spots. So tiny that I used the Marumi +3 filter and image cropping!

Butterfly

The usual rules & procedures apply for viewing the pictures…

  • All images are stored as full-size JPEGs in the linked Google+ galleries.
  • You do not need to log into Google or Google+ to view the gallery!
  • Click on each image to see a full-size JPEG with my description, if any.
  • When viewing a single image, click “Photo Details” for basic EXIF data.
  • Use the magnifying glass icon to zoom in, down to individual pixels.

Simulating Platform.runAndWait


Every JavaFX application maintains one single JavaFX application thread running the application’s event queue, much like Swing’s event dispatch thread. System-generated events such as mouse clicks are automatically inserted into the event queue. JavaFX also provides the method Platform.runLater to programmatically enqueue an arbitrary Runnable function object.

You might occasionally wish to call runLater from the JavaFX application thread itself, to ensure some action happens only after the currently executing method has returned to the event dispatcher. However, the more important purpose of runLater is to allow UI access from other threads. Generally, JavaFX UI methods must be called from the application thread or you’ll get an exception. (This is how most GUI systems work – Graham Hamilton explains why.) But the Platform methods are safe to call from any thread, so background worker threads can use runLater to manipulate the UI.
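As a minimal sketch of that last point, here's how a background worker thread might hand a UI update to the application thread. The Label field and the long-running work are hypothetical stand-ins, not part of any particular application:

import javafx.application.Platform;
import javafx.scene.control.Label;

final class StatusUpdater {
    private final Label statusLabel; // assumed to be part of a live scene

    StatusUpdater(Label statusLabel) {
        this.statusLabel = statusLabel;
    }

    /** Starts a worker thread that reports its result on the JavaFX application thread. */
    void startWork() {
        final Thread worker = new Thread(() -> {
            final String result = doLongRunningWork();  // runs off the application thread
            Platform.runLater(() ->                     // safe to call from any thread
                    statusLabel.setText(result));       // UI access on the application thread
        });
        worker.setDaemon(true);
        worker.start();
    }

    private String doLongRunningWork() {
        return "done";  // placeholder for real background work
    }
}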

There is one problem with this mechanism. True to its name, runLater always immediately returns to the caller. The specified action is executed at some unknown later time, namely when the event dispatcher gets around to handling the corresponding event. On the JavaFX application thread you can simply execute the action directly if you need to wait for its completion, but from a background thread that’s not an option. Swing provided an invokeAndWait to go along with invokeLater, but JavaFX 8 still offers no Platform.runAndWait. What to do?

Wait for runLater

The OpenJFX source code actually does define a method called Platform.runAndWait. This method executes the specified action directly when called from the JavaFX application thread, and guards runLater with a CountDownLatch to block until completion when called from another thread. However, public access is currently commented out, possibly because there’s no decent solution for error handling. The OpenJFX implementation simply prints stack traces, while my own implementation below just ignores exceptions. You can find the private OpenJFX implementation in class PlatformImpl:

modules/graphics/src/main/java/com/sun/javafx/application/PlatformImpl.java

My simplified version below skips various checks in the OpenJFX implementation. This is not a solution fit for a general library, but it's a convenient hack for situations where bad things are unlikely to happen. In particular, I don't expect JavaFX to suddenly shut down (which might deadlock the await call), nor do I expect the specified action to throw any interesting exceptions. I also ignore any InterruptedException while waiting, since I assume the application would exit anyway.

/**
 * Runs the specified {@link Runnable} on the
 * JavaFX application thread and waits for completion.
 *
 * @param action the {@link Runnable} to run
 * @throws NullPointerException if {@code action} is {@code null}
 */
public static void runAndWait(Runnable action) {
    if (action == null)
        throw new NullPointerException("action");

    // run synchronously on JavaFX thread
    if (Platform.isFxApplicationThread()) {
        action.run();
        return;
    }

    // queue on JavaFX thread and wait for completion
    final CountDownLatch doneLatch = new CountDownLatch(1);
    Platform.runLater(() -> {
        try {
            action.run();
        } finally {
            doneLatch.countDown();
        }
    });

    try {
        doneLatch.await();
    } catch (InterruptedException e) {
        // ignore exception
    }
}

Semantic Issues

There are two potentially problematic issues that arise from the nature of runLater and the event queue mechanism. First, runLater swallows any exceptions that the executed action might throw, as it’s not designed to communicate anything back to the caller. If you want error handling you need to specialize runAndWait for a Runnable implementation that provides such communication, e.g. FutureTask, or else implement your own custom scheme.
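For illustration, here is a minimal sketch of the FutureTask route. The class and method names are my own, not part of JavaFX, and the error handling is deliberately simple: any exception thrown by the action is unwrapped and rethrown to the caller.

import java.util.concurrent.ExecutionException;
import java.util.concurrent.FutureTask;
import javafx.application.Platform;

public final class FxExec {
    private FxExec() { }

    /**
     * Runs the specified action on the JavaFX application thread,
     * waits for completion, and rethrows any exception it threw.
     */
    public static void runAndWaitChecked(Runnable action) throws Exception {
        if (action == null)
            throw new NullPointerException("action");

        // run synchronously on JavaFX thread
        if (Platform.isFxApplicationThread()) {
            action.run();
            return;
        }

        // FutureTask stores any exception thrown by the action...
        final FutureTask<Void> task = new FutureTask<>(action, null);
        Platform.runLater(task);
        try {
            task.get(); // ...and get() rethrows it, wrapped in ExecutionException
        } catch (ExecutionException e) {
            throw (e.getCause() instanceof Exception)
                    ? (Exception) e.getCause() : e;
        }
    }
}

Unlike the runAndWait version above, a background thread calling runAndWaitChecked sees the action’s exceptions instead of having them silently swallowed.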

The second issue is more subtle. Although runAndWait waits for completion of the runLater call, that call still inserts the specified action at the end of the event queue. Any events that are already present will be processed first, and only then will the new action begin executing. However, when calling runAndWait from the JavaFX application thread the action is executed immediately, regardless of any present event queue contents.

Could you simply remove the entire isFxApplicationThread branch to do a blocking wait from the JavaFX application thread? No, you couldn’t. The blocking wait would block the application thread itself, preventing any further event queue processing and deadlocking the application. When calling from the JavaFX application thread, you must decide whether to run the action directly and implicitly wait for completion, or else enqueue with runLater and forgo waiting for completion. Only background threads can both enqueue and wait.

ListView Text Alignment


The ListView class of JavaFX 8 shows one item per line – one String in the simplest case. What if you want to visually separate individual string fragments (words, numbers)? You could use a TableView with multiple columns, but that may not be appropriate for your data. Or you could insert tab characters ("\t") into each string, but tab spacing is not configurable and rather unpredictable.

A better way to align individual parts of a single text column is to implement your own ListCell class. Assume a cell data type of ItemData with three properties left, middle, right that should appear at the corresponding positions within each column, with the right part also right-aligned. Here’s what the ListCell implementation would look like:

class CustomListCell extends ListCell<ItemData> {

    private Font _itemFont = ...;

    @Override
    protected void updateItem(ItemData item, boolean empty) {
        super.updateItem(item, empty);

        Pane pane = null;
        if (!empty) {
            pane = new Pane();

            // left-aligned text at position 0em
            final Text leftText = new Text(item.left());
            leftText.setFont(_itemFont);
            leftText.setTextOrigin(VPos.TOP);
            leftText.relocate(0, 0);

            // left-aligned text at position 4em 
            final Text middleText = new Text(item.middle());
            middleText.setFont(_itemFont);
            middleText.setTextOrigin(VPos.TOP);
            final double em = middleText.getLayoutBounds().getHeight();
            middleText.relocate(4 * em, 0);

            // right-aligned text at position 8em
            final Text rightText = new Text(item.right());
            rightText.setFont(_itemFont);
            rightText.setTextOrigin(VPos.TOP);
            final double width = rightText.getLayoutBounds().getWidth();
            rightText.relocate(8 * em - width, 0);

            pane.getChildren().addAll(leftText, middleText, rightText);
        }

        setText("");
        setGraphic(pane);
    }
}

The sample assumes you want to specify a custom text font, here stored in the field _itemFont. We must set this font for each Text object before we start measuring and positioning the object. That’s especially important if you rely on CSS styling for fonts. Such styling is applied after the layout calculations in updateItem have been performed, so you’ll get the correct font but wrong alignments if you don’t also set the font in code before taking metrics.

Even though we want to show plain text, we need to use setGraphic to enable precise alignment. We put all Text objects in a basic Pane that allows absolute positioning. The first Text is simply left-aligned at (0, 0). The second Text is also left-aligned but should appear at a “column” position of 4em, based on our custom font, so that’s what we specify in relocate. And the third Text should be right-aligned, so we measure its width and subtract it from the desired “column” position of 8em.

Note that you must call both setText and setGraphic in every updateItem call, even if just to clear them. That’s because ListView optimizes item display by reusing existing ListCell instances. If you don’t set both display properties, the current instance continues to show the previous item’s values!

All that remains is enabling the new ListCell styling, and that’s easy with the new lambda syntax: myListView.setCellFactory(t -> new CustomListCell()); If you only have single-line items as in the code sample, you might also wish to call setFixedCellSize with e.g. 1.1em of your item font. And that’s how you align text fragments in a ListView item.
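For completeness, here's a minimal sketch of that wiring, assuming the ItemData and CustomListCell types from above and a Font matching the cell's _itemFont:

import javafx.scene.control.ListView;
import javafx.scene.text.Font;
import javafx.scene.text.Text;

final class ListViewSetup {

    /** Installs the custom cell factory and a fixed row height of roughly 1.1em. */
    static ListView<ItemData> createListView(Font itemFont) {
        final ListView<ItemData> myListView = new ListView<>();
        myListView.setCellFactory(t -> new CustomListCell());

        // measure one em of the item font, the same way updateItem does
        final Text probe = new Text("X");
        probe.setFont(itemFont);
        final double em = probe.getLayoutBounds().getHeight();

        myListView.setFixedCellSize(1.1 * em);
        return myListView;
    }
}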

Syntax Highlighter (MT)


Lengthy code snippets wrapped in standard <pre> tags can be rather hard to read. WordPress.com has a built-in syntax highlighter, but the standard WordPress.org installation does not, and neither does Jetpack. Fortunately the WordPress.com feature is based on a freely available JS/CSS library, Syntax Highlighter by Alex Gorbatchev.

You can directly add this library to any web page under your control, or use a WordPress.org plugin for self-hosted WordPress blogs like this one. I’m using Syntax Highlighter MT by Megatome, a simple and transparent solution – just add a class to your <pre> tags. The official WordPress plugin defines new square-bracket shortcodes for each language which isn’t something I want or need.

Library Usage

Gorbatchev’s library defines a straightforward way to style code blocks which is also directly exposed by the Megatome plugin. As shown on the library’s installation page, simply add class="brush:java" to any <pre> tag you wish to style. Replace “java” with any of the numerous predefined brushes for other languages. The beauty of this solution is that you’ll still get a standard <pre> block on RSS readers or other displays that ignore JavaScript and/or CSS.

Aside from the language brush, coloring is determined by one of seven themes. You can also define your own themes but the default ones are quite pleasant. I’m currently using the Eclipse theme, which has a white background like Default but whose decoration is an unobtrusive gray rather than neon green.

Syntax Highlighter is clever enough to provide the original source code when copying & pasting out of the styled <pre> block, so your readers won’t get all the <span> tags used for styling. All told this is an excellent library with just one problem: the explicitly set font size is too big.

Font Size Fix

All supplied Syntax Highlighter themes reset the <pre> font size to 1em which is too big for my WordPress theme, or indeed for most layouts. Monospace fonts are usually set somewhat smaller than body text fonts – I’m using 0.875 (root) em on this blog. The effect would be more pronounced on layouts that shrink the body text font relative to the root em size. I suspect that’s what happens on those websites you may have seen whose code font looks impossibly big relative to the surrounding text.

I’m not sure why Gorbatchev sets the font size in the first place, but the fix is easy enough. Find the preinstalled theme files in themes/shCore*.css and delete all font-size lines from all of them, except for the 10px size used for the little question mark toolbar that shows Gorbatchev’s copyright notice. Now Syntax Highlighter will use whatever <pre> font size you’ve set in your main stylesheet.

Busy Red Wood Ants


A colony of red wood ants has discovered a nearby tree stump and is busy building its nest there. I snapped a series of pictures as ants were dragging tasty food (pictured below) and building materials into the nests. Bonus pictures include a cardinal beetle and a click beetle.

For this excursion I switched from my usual Marumi DHG Achromat +3 macro filter to the +5 version, and immediately regretted it. Even though the ants were small enough to warrant the +5 filter, hand-held focusing is too difficult and the depth of field is too shallow. Next time I’ll return to the +3 filter, it’s quite enough in conjunction with the 200 mm zoom lens.

Red Wood Ants

The usual rules & procedures apply for viewing the pictures…

  • All images are stored as full-size JPEGs in this Google+ gallery.
  • You do not need to log into Google or Google+ to view the gallery!
  • Click on each image to see a full-size JPEG with my description, if any.
  • When viewing a single image, click “Photo Details” for basic EXIF data.
  • Use the magnifying glass icon to zoom in, down to individual pixels.

Robot Writer News


Computers still struggle to master the Turing test, but that doesn’t matter where the demands on writing quality are low. In today’s sampling of publishing news we learn that this includes social networks, professional journalism, and scientific conference proceedings.

The Scientific Bot

In February, science publishers Springer and IEEE were forced to retract over 120 papers from their subscription services. They were all computer-generated gibberish “composed by a piece of software called SCIgen, which randomly combines strings of words to produce fake computer-science papers.” Embarrassingly, neither the publishers nor their readers (were there any?) noticed the fakes until they were tipped off by Cyril Labbé of Joseph Fourier University in Grenoble, France, who had written his own SCIgen detection software.

This is another blow against paywall science publishing in its losing battle with open access services. After all, diligent editing is supposed to be the primary value of expensive subscriptions. In April, Springer reacted by promising “intensified” editorial processes… and an automatic SCIgen detection system! Evidently they don’t really trust their newly “intensified” editing. Or perhaps they follow the same procedure as IEEE’s (cited in a comment): “many” conference papers have “peer-review procedures.” In other words, some unknown number of papers in paid subscriptions are not reviewed at all.

The Socializing Bot

Bots for sale on Twitter and other social networks have long been a booming business, and it just keeps getting bigger. Nick Bilton reports that bots are now employed en masse to influence political opinions in Mexico, Syria, and Turkey where

[…] an investigation found that every political party was controlling bots that were trying to force topics to become trends on social sites that favored one political ideal over another. The bots would also use a political group’s slogan as a hashtag, with the intent of fooling people into believing it was more popular than it really was.

Bots keep evolving to stay ahead of the networks’ spam filters. They use real-sounding names, simulate human wake-sleep cycles, and recycle bits of conversation from real users. Thousands of fake followers cost mere dollars. It seems the only solution would be to abandon online anonymity altogether and tie accounts to some real-life personal identity. More likely human users will further retreat behind privacy guards while the public view is dominated by the steady background noise of bot traffic.

The Journalistic Bot

The sad residues of professional journalism still trying to survive on the Internet are now adding “Around the Web” sections to their pages. These contain article links that look like editorial recommendations for related content, but are in fact paid links generated by bots. The quality of this quasi-advertising is generally pathetic, with titillating headlines leading to weaponized clickbait. The good news is that users seem to be learning to avoid these latest cesspools. The main articles are usually better but no longer necessarily written by humans, either.

In February, the Los Angeles Times made headlines with its robot-written earthquake report. The LA Times uses research bots extensively, although human writers still assemble the final article. Given the stereotypical nature of most news reporting this won’t last, and a company called Narrative Science is already producing purely synthetic journalism. Its software writes many thousands of articles in fields such as sports and finance, for a variety of Internet outlets including Forbes.

Readers can’t tell the difference – not so surprising as the typical baseball or earnings report is a predictable set of phrases decorating a computer-friendly set of numbers. Narrative Science also integrates the necessary domain expertise to add flavor and avoid errors. Co-founder Kristian Hammond claims that in 20 years, “there will be no area in which Narrative Science doesn’t write stories.” Maybe they could write better fake conference papers, too.

Light Field Photography


The American company Lytro makes cameras based on light field photography. The first version, a tiny box, made headlines but didn’t have much impact. Now Lytro tries again with the Lytro Illum, an improved model about the size and shape of a typical “prosumer” camera. One defining feature of light field photography is the ability to refocus an existing picture, and you can try that for yourself on Lytro’s Living Pictures page: click a picture, then anywhere within the picture to bring that part into focus.

While the Illum announcement was once again widely publicized, the articles I’ve seen were rather vague on the underlying technology, as is the (rather messy) Lytro website itself. The best way to learn about the technical and scientific details is Lytro founder Ren Ng’s 2006 dissertation, Digital Light Field Photography (PDF hosted by Lytro). Thankfully, Ng offers easy-to-understand and generously illustrated overview sections along with the obligatory mathematical formulae. So if you have a scientific (and photographic) bent of mind you’ll want to read the dissertation yourself. The technology is both fascinating and splendidly explained by Ng, so it’s well worth your time.

In the rest of this post, I’ll try to highlight the most important aspects of the dissertation from a practical viewpoint. Note that I’m neither a mathematician nor an expert in optics, so I’m depending on Ng’s simplified explanations myself!

Recording Light Fields

I grabbed two of Ng’s plentiful illustrations to briefly present the basics of light field photography (LFP). The first image (Figure 2.3 in the dissertation) shows a cross-section of a conventional camera. Imagine this slice of camera as recording a one-dimensional image, with x as the axis of pixel coordinates. Since every pixel gathers all rays projected from the same point of origin in the focal plane, assuming the lens is precisely focused, we can introduce a second axis u that records the point at which each ray for a given pixel x intersected the lens.

Normal Camera Recording

For a normal camera this second dimension is meaningless, as each output pixel simply sums the light from all input rays. However, a “plenoptic” camera that is capable of recording a light field covers its sensor with an array of microlenses, each gathering all light coming from the main lens and projecting it onto a small group of actual sensor pixels. The second picture (Figure 3.1 in the dissertation) visualizes the effect for the cross-section of a plenoptic camera, again producing a one-dimensional image with x as the axis of output pixels. Now the u axis is itself divided into sections, each corresponding to a microlens. That is, for each output pixel x the plenoptic camera distinguishes various contributing input rays along u – directional information that is lost in a normal camera.

Plenoptic Camera Recording

To obtain a conventional photograph from a plenoptic camera, we can simply sum all vertical input rays u for each pixel x, which is exactly what a normal camera does implicitly. But the additional directional information enables us to perform different sums. As it turns out (please read the dissertation for the gory mathematical details), tilting the vertical “column” that defines an output pixel corresponds to changing its focus, and more complex deformations correspond to lens aberrations. Summing the microlens “cell” data from different adjacent columns, with some appropriate weighting, therefore enables algorithmic focus shifting and correction of lens aberrations.
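To make that summing step concrete, here's a tiny "shift-and-add" refocusing sketch for a one-dimensional light field. This is not Lytro's actual algorithm, just a nearest-neighbor illustration without aperture weighting: a shift of zero reproduces the conventional photograph, while nonzero shifts tilt the summed column and move the virtual focal plane.

/**
 * Minimal shift-and-add refocusing sketch for a 1D light field.
 * lightField[x][u] is the radiance recorded at spatial sample x
 * from directional sample u. shiftPerU = 0 reproduces the conventional
 * photograph; other values tilt the summed "column" and thereby
 * shift the virtual focal plane.
 */
static double[] refocus(double[][] lightField, double shiftPerU) {
    final int width = lightField.length;          // spatial samples (x)
    final int directions = lightField[0].length;  // directional samples (u)
    final double[] image = new double[width];

    for (int x = 0; x < width; x++) {
        double sum = 0;
        int count = 0;
        for (int u = 0; u < directions; u++) {
            // shift each directional sample proportionally to its
            // (centered) lens position before summing it into pixel x
            final double uCentered = u - (directions - 1) / 2.0;
            final int xs = (int) Math.round(x + uCentered * shiftPerU);
            if (xs >= 0 && xs < width) {
                sum += lightField[xs][u];
                count++;
            }
        }
        image[x] = (count > 0) ? sum / count : 0;
    }
    return image;
}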

Photographic Benefits

So now we have a light field recording. What are the benefits compared to a conventional picture?

Focus Correction — This was the original motivation for Ng’s research. Unfortunately, it turned out to be the least important use of LFP by the time Lytro actually commenced production. Today, the amateur photographers most likely to struggle with focus problems simply use smartphone cameras with effectively infinite depth of field, as noted below.

Interactive Focus Shifting — Lytro’s sample images nicely demonstrate this effect. The big disadvantage is the requirement for an appropriate interactive display. Moreover, artistic photographers will want to permanently fix the focal plane at their discretion. Amateurs once again won’t care and will just use their infinitely focused smartphone pictures.

Aberration Correction — The aberrations of current lenses are already handled by software (in or out of camera), but LFP’s great flexibility could enable simpler and cheaper lens architectures. Moreover, it can correct for aberrations not induced by lenses, e.g. atmospheric turbulence in telescopy.

Greater Aperture — The Lytro Illum uses a remarkable f/2 aperture throughout its entire 30-250 mm zoom range. That’s possible because LFP compensates for the shallower depth of field and increased aberrations normally induced by greater apertures. Consequently, cheaper lenses can be used with less light, longer zoom, and/or faster shutter speeds.

Holographic Recording — Since LFP knows each pixel’s focal plane, it can derive its spatial distance from the lens and so construct a holographic relief. This neat feature drives industrial and scientific LFP use, but remains irrelevant to consumers until we have convenient holographic displays to match.

Macro Photography — The magnification required for macro photography immensely compresses the depth of field. Obtaining a correct focus is tricky, and capturing the entire object may require digitally combining multiple exposures with different focal points. LFP could at least greatly simplify macro photography, if not entirely obsolete the laborious multi-exposure procedure.

Inherent Limitations

When I first heard about LFP I was quite enthusiastic and thought it might obsolete conventional lenses entirely. Sadly, this turned out not to be the case. LFP has two important limitations that qualify the benefits listed above. These are discussed in chapter 3 of the dissertation.

Lower Resolution — To obtain directional information, multiple pixel elements on the image sensor must be grouped under one microlens, effectively forming a single output pixel. Consequently, the sensor’s effective output resolution shrinks with the size of each microlens pixel group. Increasing the latter improves refocusing power but reduces maximum image quality. Eventually the image is “focused” everywhere thanks to high directional resolution (many pixels per microlens), but that “focus” is actually rather blurry due to low spatial resolution (few microlenses).

Ng notes that image sensors can theoretically achieve a higher resolution than anyone knows what to do with, so this tradeoff shouldn’t matter in the long run. However, smaller sensor elements reduce light gathering power and increase errors, up to a limit imposed by diffraction. The Lytro Illum offers only 5 MP effective spatial resolution which is slightly laughable for a big $1,500 camera (my Sony NEX-7 has 24 MP). Unfortunately, an LFP sensor that reaches good resolution along both spatial and directional axes is bound to be rather gigantic.

Conventional Casing — Ng cites previous work that dashed my hopes for doing away with conventional lenses and camera casings. Arrays of microlenses without a main lens have already been constructed by various research groups, but cannot match the image quality of similar arrays mounted behind a conventional main lens. The reason is once again diffraction, and thus a fundamental physical limitation. It seems existing smartphone cameras already offer the best feasible compromise of size and image quality. LFP is not an alternative for minimizing size and weight.

(Lack of) Market Success

Why hasn’t Lytro’s first model caught on? The obvious answer is that it was overtaken and obsoleted by smartphone cameras. These have virtually unlimited depth of field thanks to their tiny focal length (explanation), meaning they never have trouble focusing on anything. Since modern smartphone cameras also offer good resolution and even somewhat decent low-light performance (aided by post-processing), and since casual photographers don’t normally care about aesthetic bokeh, a consumer-grade light field camera was irrelevant right out of the gate.

So will the new Lytro Illum fare better? Repositioning for the (semi-) professional market was the only option available to Lytro, but unfortunately this market emphasizes maximum resolution and sharpness where the light field technology is forced to compromise. On the other hand, finding the perfect lens for the perfect aperture and depth of field is part of the art and sport of photography, so to speak, and it’s not clear whether the audience for a $1,500 camera would even accept aid in this regard.

Unless light field sensors can match the perceptible quality of conventional sensors so closely that big camera makers are willing to make the switch, I don’t think the technology as it stands has much of a future outside of industrial applications. However, there should be a niche for macro photography which is LFP’s one stand-out ability that’s both highly useful and hard to replicate conventionally. We might see a few good specialized cameras for that purpose.

Star Chess: The Next Generation


At long last I’ve finished updating Star Chess to Java and JavaFX. This little space empire builder was the founding project of the Kynosarges website. The original Fortran 90 version (yes, really) was the first page I published back in 1999. The first total rewrite in plain C for Windows followed in 2001. Then the project languished while I moved on to other things.

The arrival of JavaFX was the perfect opportunity to give Star Chess another facelift. The first screenshot shows how the game looks now, with its new JavaFX interface and lots of custom assets – NASA photographs and embedded fonts for text and icons. The second screenshot shows how the game looked in 2001. Admittedly, the binary package is now also about ten times the size…

Star Chess 2014

Star Chess 2001

Porting the core algorithms was straightforward, but I had to completely restructure the user interface. The C version was written directly on the Win32 API, with no GUI framework of any sort, so I merrily mixed interface and engine code all over the place. There was no thread affinity, so my Windows message handlers just decided ad hoc which thread might have called them. And of course there were no classes or objects in plain C, either. In retrospect it’s quite amazing the program ever worked at all!

This time the interface is completely isolated from the game engine, ensured by a separation into two different NetBeans projects. The “core” project housing the engine and computer player algorithms conforms to the new Compact1 profile in Java 8, so it should work on the most minimal Java platforms. This project does include an equally minimal UI, namely a console runner I use for testing. The next screenshot shows off its beautiful ASCII art…

Console Runner

The original motivation for writing Star Chess was to experiment with chess-like turn prediction algorithms in an empire building game. These algorithms are essentially unchanged from the original release, except that the source code is much more readable. I updated the separately available computer player documentation (PDF) which compares the complexity of chess to empire builders, and examines strategies to reduce that complexity and extend the turn prediction horizon of computer players.

The binary and source packages are available on the Star Chess home page as usual. See the ReadMe file for more information, and let me know if anything doesn’t work as expected. Once again, Windows users should note that the Client VM which Oracle inflicts on them by default is slow and obsolete. Star Chess turn prediction gets a 50% speed boost from the Server VM, and so provides another fine sample case for my article on Java Client VM Performance.


∞ Managed Code & Silly Science


Time for another roundup of worthy links from the last couple of months. First, if you’re at all interested in historical wargames, you should visit Wargame_[space] where veteran strategy game writer and actual brain surgeon Bruce Geryk now collects his articles.

Quirks Mode’s Peter Paul Koch has published three slide collections from the HTML5 developer conference, on viewports, touch events, and the mobile web in general. The viewport presentation in particular is highly recommended for all web developers.

Two interesting developments in attaching native backends to bytecode frontends. WebKit now includes a fourth optimization tier based on LLVM, normally used with statically compiled languages such as C++. Filip Pizlo also gives a very detailed walkthrough of WebKit’s overall JavaScript architecture.

Moreover, Unity has announced a switch of its .NET runtime from Mono to IL2CPP. That is, .NET assemblies are statically compiled to native code, and execute in a native environment that provides equivalent services to the .NET virtual machine, such as garbage collection and reflection.

We also have two mathematical treats. MAFFIA stands for Mathematicians Against Fraudulent Financial and Investment Advice, dedicated to exposing the cherry-picked nonsense math of “technical analysis.” And Tyler Vigen has made a fine gallery of Spurious Correlations. Did you know that the marriage rate in Maine correlates with deaths caused by amputations of limbs?

Finally, Venkatesh Rao has published Science! and Other Off-the-Wall Études, including such gems as the definition of Science! which is so awesome that I’ll quote it in full:

When “science” as description of a natural cognitive process turns into “Science!” as an identity-anchoring pattern of codified conspicuous production by the insecure, you get a cargo-cult procedural prescription. Unlike science, Science! is practiced by those whose social awkwardness is caused by actual social incompetence rather than the indifference and neglect that comes from being absorbed on another front. People who need procedural scaffolding because they never actually cultivated the necessary natural instincts. So natural behaviors like attentiveness and trial-and-error turn into Observation! and Experimentation! Seeking clarity by corralling doubt turns into theatrical Hypothesis! and systematic thought turns into Rigor!, Systems! and Methods! The result is an elaborate theater of mostly useless production where citations and awards become measures rather than consequences of value and curiosity turns into an expensive optional extra element in the business of grant-writing. Over the last fifty years, social support for science has gradually been diverted towards Science! which is why today, we’re paying far too much for whatever science actually emerges. I am neither scientist nor Scientist! but one big payoff for me, from having been accidentally put through the Scientist! production machine, is that I can tell the two apart fairly well now. It is a skill that is very likely to get you thrown out of the Scientific! establishment.

(Updates to Developer Articles, Science Links, and Subscriptions)

Quarriors’ Quirky Dice


Icarus Studios’ Quarriors! for iPad was released in December 2013 to mixed reactions. The app was certainly pretty but the interface was a confusing mess of taps and swipes, with too many pointless pauses and not enough information for those new to the game. On top of that, the AI players were pretty terrible and seemed to deliberately ruin their own sets of dice.

Many updates and one expansion later, I’m happy to report that most issues have been fixed. Granted, I can no longer judge how accessible the game is to complete newbies, but the help screen now includes a link to the full PDF manual (see below) which should explain everything you’d want to know. The gesture interface was overhauled, and various decorative embellishments can be disabled to speed up the game. Last but not least, the “Hard AI” level beat me three times in a row after the latest patch, which had never happened before, and generally acted in a credibly competent manner.

I held off on my review because I waited for these fixes. Now that they have arrived, I can recommend Quarriors and its expansion to everyone looking for a quick fix of simple dice/card strategy with a generous dose of luck. Note that I’ve only played the AI in this game, although multiplayer is supported as well. The very short duration of a full match doesn’t seem worth setting up a GameCenter session for, and to my knowledge Quarriors still lacks turn replay for asynchronous games. Nevertheless, I think the unusual dice mechanics are worth a closer look for anyone interested in board game design.

Quarriors Game Design

The Quarriors! board game was designed by Mike Elliott & Eric Lang and published by Wizkids Games. The iPad version is a literal adaptation (with animated dice rolling across the screen…) that also includes the Quarmageddon! expansion, easily summarized as “more of everything.” Quarmageddon’s Rulebook (PDF) is linked both from the expansion’s Wizkids page and from the iOS app’s help page. This is the only manual you need, as it includes the full base game rules with a few additions for the expansion.

Quarriors consists of each player building a set of dice afresh in each game, similar to deck building in the card game Ascension. New dice are acquired from a public central area, the “wilds,” in exchange for the currency produced by a player’s current throw of dice. Since everything here must start with Q, the currency is called “quiddity” and the dice in the wild are called “quarry.” Yes, really.

Quarriors!

But most dice also have effects other than producing quiddity, and that’s where things get interesting. Generally, 1–3 sides of each six-sided die only provide quiddity while the remaining sides represent the die’s special abilities, often in different strengths or other variations. So when you pull a Gnome Barbarian die out of your dice bag and roll it, you might just get 1–2 quiddity (the first two sides); or a Level 1 Gnome Barbarian with Attack 1, Defense 3, and Immunity (the second two sides); or a Level 2 Gnome Barbarian with Attack 2 and Defense 4 but no Immunity (the last two sides).

There’s more. You cannot actually use a creature or spell whose side comes up until you “summon” it with quiddity equal to its level, reducing the amount available for capturing from the wilds. So why would you want to summon anything? Because only creatures that have been summoned and survived for a whole round earn you “glory,” the game’s victory points, and also let you cull dice from your bag. Once summoned, your current set of creatures automatically attacks those of the other players, so as to reduce the glory they will earn this round. Defeated creatures immediately go back into their player’s bag of dice without contributing glory or allowing culling.

Since creatures and spells do nothing unless summoned, culling your bag down to only high-level dice is inadvisable. Without some quiddity-only dice, you might throw lots of powerful creatures you can’t summon! While highly random, I find these game mechanics quite intriguing. Quarriors dice effectively combine multiple cards in Ascension or Magic: The Gathering, allowing the compression of surprisingly varied gameplay into a very small number of elements (dice) and turns. A refreshing change from the typical use of dice as mere low-tech random number generators.

Glyph Positioning in JFreeSVG & OrsonPDF


Object Refinery Ltd. makes two small but extremely useful Java libraries for producing vector graphics. JFreeSVG (free & commercial) supports Scalable Vector Graphics (SVG) and HTML5 Canvas output, and OrsonPDF (commercial with free demo) supports Adobe PDF output. Each provides a custom implementation of java.awt.Graphics2D, the 2D drawing surface for the Java AWT graphics library.

Both libraries offer an attractive and unusual option for drawing text. In addition to a handful of built-in fonts (PDF) or whatever fonts happen to be present on the client (SVG), they can render text as vector graphics using the line and curve primitives provided by the output format. This considerably bloats the resulting file so you wouldn’t want to use it for long documents, but it’s perfect for short bits that require specific fonts – individual phrases, equations, or diagrams. (Adobe PDF also supports font embedding but OrsonPDF 1.6 does not, so vector drawing is your only option. This does have the benefit of avoiding the licensing issues endemic with font embedding, though.)

Moreover, both libraries always use vector glyphs when drawing TextLayout objects. If you want the measuring features or text attributes provided by that class, you’ll get no other text rendering option. So I happily created vector-rendered text for a UML diagrammer application, but I soon noticed a problem: the glyph positioning was often incorrect. The following picture, exported from a sample PDF to PNG at 1200 DPI in Adobe Acrobat, shows a few lines in Arial that demonstrate the problem:

AWT Glyph Spacing

The left side is the original output, drawn using one TextLayout instance per line in a straightforward way. The right side is the corrected output that I eventually figured out, as explained below. SVG is visually indistinguishable from the showcased PDF on both sides, so I didn’t reproduce it here. Just in case you’re not seeing the defects clearly, the most visible ones are the following:

  • “kynosarges.starchess” — “yn” is too close while most other glyphs are spaced too widely and inconsistently, especially “es.” Such completely wretched spacing is peculiar to boldface.
  • “VIEW_INTERNAL” — “NT” and “NA” are almost touching whereas “TE” is too wide.
  • “maxPositions” — “ma” is too close, “xP” are almost touching.
  • “totalNanoTime” — “Na” and “me” are too close, “Ti” is too wide.

The exact defects vary between characters and fonts. I used Arial here to show that the issue is not peculiar to any “fancy” fonts for which vector rendering would be more important, and also to enable as many readers as possible to reproduce the sample directly. Rest assured that some defects cropped up regardless of the font I used!

Cause & Solution

As I eventually discovered, the root cause is the resolution that TextLayout.draw uses for relative glyph positioning. With a FontRenderContext obtained from the current Graphics2D object, that resolution is identical to that object’s drawing resolution. And the drawing resolution is left at its output-dependent default by both JFreeSVG and OrsonPDF:

  • SVG — one HTML pixel (px) which equals 1/96″, give or take browser adjustment.
  • PDF — one typographical point (pt) which equals 1/72″, give or take reader adjustment.

For traditional bitmap font rendering, the output resolution defines a pixel grid which constrains glyph positions as well as their shapes. Low-resolution displays and bitmap images require careful adjustments to produce readable text, e.g. ensuring that a lower-case “i” which is just one pixel wide also has one pixel of whitespace to its left and right. But for high-resolution vector rendering with floating-point coordinates, the low-resolution grid of integral layout coordinates is quite meaningless. Using it for glyph positioning is wholly inappropriate and leads to the defects shown above.

Getting properly spaced glyphs when the positioning algorithm insists on using integral coordinates therefore requires scaling up the output resolution for text rendering. Fortunately Graphics2D accepts the required scaling transformation, although its use is somewhat cumbersome. First you need to apply an AffineTransform with a scaling factor of less than one. I’m using a floating-point variable scale holding the inverse factor of one or greater, resulting in this scaling call:

graphics.setTransform(AffineTransform.getScaleInstance(1/scale, 1/scale));

Then, of course, all font sizes and drawing coordinates must be explicitly multiplied by scale to correct for this transformation. You can download a small ZIP package with the complete source code and sample output in PDF and SVG format. The two sides of the sample output were obtained at a scale of 1 and 20, respectively. This works out to 72 and 1440 DPI (PDF) or 96 and 1920 DPI (SVG). Scaling factor 20 produces a very respectable print-like resolution, ensuring that glyph positioning is visually correct on all conceivable displays and printers.
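Putting the pieces together, here's a minimal sketch of such a scaled draw call. The surrounding Graphics2D is assumed to come from JFreeSVG or OrsonPDF, and the font, text, and coordinates are placeholders:

import java.awt.Font;
import java.awt.Graphics2D;
import java.awt.font.TextLayout;
import java.awt.geom.AffineTransform;

final class ScaledTextDrawer {

    /**
     * Draws text at (x, y) in output units, but lays out the glyphs on a
     * grid that is "scale" times finer than the surface's native resolution.
     */
    static void drawScaled(Graphics2D graphics, String text,
                           Font font, float x, float y, double scale) {

        final AffineTransform oldTransform = graphics.getTransform();
        // shrink everything back down by 1/scale...
        graphics.setTransform(AffineTransform.getScaleInstance(1 / scale, 1 / scale));

        // ...so font size and coordinates must be blown up by scale
        final Font scaledFont = font.deriveFont((float) (font.getSize2D() * scale));
        final TextLayout layout = new TextLayout(text, scaledFont,
                graphics.getFontRenderContext());
        layout.draw(graphics, (float) (x * scale), (float) (y * scale));

        graphics.setTransform(oldTransform);  // restore for subsequent drawing
    }
}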

Failed Alternatives

The first and best alternative solution would be to set the Graphics2D rendering hint that allows fractional metrics when positioning individual character glyphs. If positions were calculated in fractional coordinates, the nominal resolution of the positioning grid wouldn’t matter. Unfortunately, neither JFreeSVG 1.9 nor OrsonPDF 1.6 seem to support this hint. Adding such support would be highly desirable (and render this entire article obsolete).

Another alternative I attempted was to create a custom FontRenderContext with the desired resolution, i.e. scaled by 20 compared to the original Graphics2D surface. That worked fine… until I also enabled kerning. Java AWT, or at least TextLayout, evidently uses both resolutions specified by Graphics2D and FontRenderContext for kerning, so kerning pairs were wildly out of place compared to the remaining text when these resolutions differed. This may be an AWT bug.

Bonus Tip: Kerning & Ligatures

Speaking of kerning, I had some trouble enabling this feature, as well as ligatures. Both are supported as TextAttributes and so can be added to an AttributedString – but had no effect when I did so. Turns out that while the Java API documentation does not list them among the “primary” attributes that are replaced by explicit font selection, this is in fact precisely what happens. So when you use the FONT attribute to apply a precreated font to your AttributedString, you must enable kerning and ligatures on the font itself in order to have any effect. This requires calling the method deriveFont(Map), as demonstrated in Oracle’s AttributedText tutorial. My own sample code also uses this technique.
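As a minimal sketch of that technique, assuming an arbitrary base font and text:

import java.awt.Font;
import java.awt.font.TextAttribute;
import java.text.AttributedString;
import java.util.HashMap;
import java.util.Map;

final class KernedText {

    /** Builds an AttributedString whose font has kerning and ligatures enabled. */
    static AttributedString createKernedString(String text, Font baseFont) {
        // enable kerning and ligatures on the Font itself...
        final Map<TextAttribute, Object> attributes = new HashMap<>();
        attributes.put(TextAttribute.KERNING, TextAttribute.KERNING_ON);
        attributes.put(TextAttribute.LIGATURES, TextAttribute.LIGATURES_ON);
        final Font kernedFont = baseFont.deriveFont(attributes);

        // ...because selecting a FONT attribute would otherwise replace them
        final AttributedString string = new AttributedString(text);
        string.addAttribute(TextAttribute.FONT, kernedFont);
        return string;
    }
}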

∞ Balls of Mud & Java Optimization


Here’s another link roundup, this time exclusively about software development…

Brian Foote and Joseph Yoder’s 1999 paper Big Ball of Mud, also available for download in various formats, posits the likely inevitability of this infamous design anti-pattern and is still well worth reading. Foote’s 2012 presentation Who Ever Said Programs Were Supposed to be Pretty? reiterates the argument more compactly. Evidently little has changed in the meantime!

Aleksey Shipilёv has published several in-depth articles on Java/JVM optimization. The Exceptional Performance of Lil’ Exception compares the performance cost of exceptions to normal branching, Nanotrusting the Nanotime measures the surprisingly poor precision and scalability of the popular benchmarking method System.nanoTime, and Java vs. Scala: Divided We Fail tracks the curious case of a tail-recursive Scala function that runs twice as slow when correctly annotated with @tailrec.

Lastly, The Flaw Lurking In Every Deep Neural Net highlights a surprising property of neural networks described by Szegedy et al. (PDF): Every deep neural network has “blind spots” in the sense that there are inputs that are very close to correctly classified examples that are misclassified. These flaws are known and rare enough to be irrelevant in practice, but it’s a fascinating demonstration of how artificial “neural networks” differ from the real neural network in our brains.

(Updated Developer Links and Java Links.)

Custom KOMA-Script Letter 1.1


Over the last couple of KOMA-Script releases, I noticed that my custom letter format for this style package had changed its appearance. Specifically, the second page’s header and footer had moved from the margins into the page, taking up far too much space. I opened a (German) bug report on the KOMA-Script forum, including sample LaTeX and PDF documents.

Happily, I quickly got a reply from user Elke with the correct solution. As it turns out, the new behavior is intentional and explained in the KOMA-Script guide. Here’s the relevant excerpt at the bottom of page 39 in scrguien.pdf (the equivalent passage in the German guide scrguide.pdf is on page 44).

The decision is easy when text and header or footer are separated from the text body by a line. This will give a “closed” appearance and header or footer become part of the text body. Remember: It is irrelevant that the line improves the optical separation of text and header or footer; only the appearance when viewed out of focus is important.

This refers to the switches headinclude and footinclude which control whether the header and footer area, respectively, are considered part of the document margins or of the text body. By default these switches are off so that header and footer don’t intrude on the text body. But as soon as you enable a separating line, the switches are toggled automatically and pull the header/footer into the text body!

I certainly don’t agree with the KOMA-Script guide’s “easy decision” in this respect. I would expect that pulling marginal stuff into the text body is either based on total height or under exclusively manual control, not triggered automatically by a thin visual separation. Be that as it may, it’s easy to disable this undesirable behavior once you’re aware of it:

\KOMAoptions{
  headsepline, footsepline,
  headinclude=false, footinclude=false
}
\recalctypearea

The explicit \recalctypearea is required to update the page layout after the switches have been manually disabled. I updated the style file in KomaLetterSample.zip accordingly and recreated the sample output, KomaLetterSample.pdf. Note that there appears to be a genuine bug in KOMA-Script 1766 related to this behavior where only footsepline but not headsepline triggers the layout alteration. I expect this will be fixed in a future release.

While I was at it, I also updated the sample bank connection in the letter head to the new European BIC/IBAN numbers, and added a variety of style adjustments such as increased indentation from my general LaTeX style file. Keep or remove them as you like.

∞ Scientific Languages & Java Memory


Programmers apparently double down on publishing articles and updates during summer vacations. Here’s the second link collection this month that’s exclusively concerned with software development, and mostly with Java…

Lambda the Ultimate recently linked and excerpted two excellent posts on scientific computing by Graydon Hoare. Published in March, the first article looks at the history of computer languages as well as Python’s current dominance as a dedicated slow scripting language, intended to be coupled with a fast systems programming language. The second article is new and examines the less popular alternative, “Goldilocks” languages designed to handle all tasks, from ancient Lisp to the new Julia.

Brian Goetz’s State of the Specialization is a first informal sketch of proposed enhancements to Java and the JVM to support generics over primitives, and eventually over the also proposed value types. That’s a highly desirable fix for one of Java’s most obvious remaining defects compared to C#.

Lucy Carey’s Jaxenter article alerted me to Jan Kotek’s MapDB project, a very interesting Java database engine that can transparently serialize concurrent Java collections. MapDB originated as JDBM4 which itself was the latest of Kotek’s (G)DBM ports. Although I have yet to try it myself, MapDB looks like an ideal solution for simple local data persistence.

Aleksey Shipilёv’s Java Memory Model Pragmatics is a “very long transcript for a very long talk” that explains in detail how Java regulates concurrent memory access. Be sure to also check out his previous articles on optimization if you haven’t yet.

Lastly, the first official standard for Google’s new browser-based Dart language has been released. Standard ECMA-408: Dart Programming Language is free to download, like all ECMA standards. Hopefully the public standard will encourage browsers other than Chrome to integrate Dart support, though note that you can already compile to JavaScript.

(Updated Developer Links and Java Links & Tools)

Sony Alpha 7R & SEL-2470Z


After 1.5 years with the Sony NEX-7 (APS-C format) I realized I wouldn’t stop boring people with my amateur pictures anytime soon, so I might as well upgrade to a full-frame camera. I considered the new Nikon D810, a traditional DSLR that got excellent reviews, but eventually settled for the mirrorless Sony Alpha 7R using the same 36 megapixel sensor.

The absence of a mirror makes the A7R body much smaller, lighter, and cheaper – without actually impeding functionality in any way. Mirrors were necessary with chemical film but a digital sensor’s image can be directly projected onto an electronic viewfinder, so there really is no need for a mirror in any digital camera. I suspect their continued presence in high-end models is a cargo cult, with photographers slow to admit their cherished signature technology is an obsolete relic, and manufacturers happy to extract a nice extra profit for as long as they can.

You can see the Sony NEX-7 (left) and A7R (right) next to each other in the iPhone picture below. Both cameras are lying on their left side; the A7R is bulkier only because of the optional vertical grip (Sony VG-C1EM) mounted at the bottom. Without it the bodies are nearly identical in size. Here the NEX-7 shows the SEL-18200 (f/3.5–6.3) ultrazoom lens while the A7R has the SEL-2470Z medium zoom, a splendid Carl Zeiss lens with constant f/4 across its 24–70 mm range.

Sony NEX-7 & Alpha 7R

The optional grip isn’t just for convenient vertical operation, but more importantly stores two batteries (Sony NP-FW50) instead of the single one that fits in the camera body. The A7R draws rather a lot of power, and you shouldn’t expect one battery to last through an entire excursion. The grip switches automatically to the second battery once the first is exhausted, saving you from swapping it out manually. Annoyingly, Sony got stingy on accessories: the A7R comes only with a USB charger – which doesn’t work for the grip! At least I could reuse my NEX-7 batteries and stand-alone charger, so I did not have to buy these extras. (There are cheaper third-party options, too.)

I could also reuse my circular polarizing filter and my cherished Marumi DHG Achromat +3/+5 macro filters, as the SEL-2470Z has the same 67 mm diameter as the SEL-18200. Given the lower maximum zoom of 70 mm I’ll probably use the +5 filter a lot more than I did with the 200 mm lens. Sony makes another new constant f/4 zoom lens for the A7 series covering the 70-200 mm range but I don’t have that one yet, and it’s also a different diameter (72 mm). Note that you’ll want to check the firmware version as soon as you get any A7 model – mine was still at 1.0 even though the essential 1.02 update had been out since March.

Impressions & Gallery

Overall I got a great first impression of the A7R and SEL-2470Z. There’s finally a proper Auto ISO mode for manual exposure which had been inexplicably missing from the NEX-7, and even a dedicated exposure compensation dial which is applied on top of the automatically measured ISO value. Autofocus is somewhat slow but no worse than the NEX-7, and manual focus adjustment (DMF, “direct manual focus”) is a dream with the automatically zooming 2.4 MP OLED viewfinder.

If the NEX-7 and SEL-18200 often had me fighting to get enough light onto the sensor, the A7R and SEL-2470Z were often too bright even on a cloudy day. Having to raise rather than lower the shutter speed for optimal exposure was quite a new experience! Nor were darker situations in any way problematic – even the ISO 25600 exposure in the gallery below looks decent enough. At reasonable ISO levels noise practically does not exist, despite the very high sensor resolution.

And here’s a sample gallery. For my first trip with the A7R I went to the nearby Ilzschleife, a place just north of Passau where the small but navigable river Ilz almost loops back upon itself before flowing into the Danube. I used the polarizing filter on all pictures, and the +5 macro filter for one picture. The usual rules & procedures apply for viewing the pictures…

  • All images are stored as full-size JPEGs in this Google+ gallery.
  • You do not need to log into Google or Google+ to view the gallery!
  • Click on each image to see a full-size JPEG with my description, if any.
  • When viewing a single image, click “Photo Details” for basic EXIF data.
  • Use the magnifying glass icon to zoom in, down to individual pixels.

∞ Win32 High DPI & FontForge


Over at Dr. Dobb’s, Gastón Hillar has published two articles on Windows 8.1 desktop applications that properly support high DPI displays. Part 1 covers basic concepts, and part 2 shows C/C++ sample code for the Win32 API. The first article is strongly recommended for all Windows developers. The second one, while no doubt useful, sadly illustrates that the default way of Windows programming in 2014 still employs a portable macro assembler from the 1970s with an equally primitive API from the 1980s…

George Williams’ monumental X11-based font editor FontForge has received a new (unofficial) Windows port by Jeremy Tan. It’s derived from Matthew Petroff’s older port but significantly improved, for example with support for multiple Windows drives – a concept that Unix and therefore the original X11 version lack. Whether Unix or Windows, even people who don’t plan on editing fonts will want to keep a copy of FontForge around due to its comprehensive font conversion features.

In the random links department, first we have a list of popular websites that appear to run HTML5 Canvas finger-printing scripts. This ingenious covert scheme for persistent user tracking is explained in The Web Never Forgets (PDF), among others.

User interfaces provide a more overt opportunity to annoy users. The Rise of the UX Torturer draws undeniable parallels between advertising-driven UI design and medieval torture chambers. Finally, Elyot Grant’s The Role of Luck analyzes the need for some randomness in competitive games, and how to provide it without just rolling dice all the time. A very stimulating read for game designers.

(Updated High DPI Settings in Windows and Typography Links)

Aperture & Macro Photography


The constant maximum aperture of f/4 on the Sony Alpha 7R’s two standard zoom lenses – SEL-2470Z (24–70 mm) and SEL-70200G (70–200 mm) – allows for a wonderful amount of light and great bokeh. However, one area where it’s not terribly useful is macro photography with screw-on magnification lenses, in my case the Marumi DHG Achromat +5. The macro lens itself already greatly compresses the focus depth, and coupled with a wide aperture you may end up with a focal plane that’s literally a tenth of a millimeter deep before visible blurring begins.

Seeing a largish spider (~4 cm leg span) that did me the courtesy of hanging out in a convenient location, I decided to take a few comparison shots at apertures of f/4, f/6.3, and f/10, with increasing ISO values to compensate. All shots use the SEL-2470Z at its full 70 mm extension with the Achromat +5. Shutter speed is 1/100 sec to allow for hand-held shooting. You can find the gallery with the original pictures at the end of this post, in case you like spiders. Here are cropped details of the same two front legs for each of the three settings:

Spider Apertures

Spider Apertures (click for original size)

The embedded image is a small scaled-down JPEG. Click to see the original 1.4 MB image, saved as PNG to avoid recompression errors. As usual I enabled lens correction and punched up the colors in Adobe Lightroom but performed no other post-processing, in particular no smoothing to eliminate grain. From left to right, these are the parameters and results (a quick arithmetic check follows the list):

  • f/4, ISO 800 — Only a short section of the legs is in focus at all. The web strands in the background are barely visible.
  • f/6.3, ISO 2000 — The majority of the visible leg area is in focus. You can see high-ISO grain starting to creep in but it’s not bad yet.
  • f/10, ISO 5000 — Technically the focus depth is extended further, but that’s only apparent from the web strands in the background. Severe grain obliterates most details on the legs.
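
The ISO values above aren’t arbitrary – they roughly compensate for the light lost by stopping down, since each full stop of aperture requires doubling the ISO at a fixed shutter speed. A minimal Java sketch of that check (my own back-of-the-envelope arithmetic, not anything the camera reports):

    // Sanity check: the ISO needed to hold exposure when stopping down from f/4 at ISO 800.
    public final class ExposureCheck {

        // Stops of light lost when going from one f-number to a narrower one;
        // the light admitted scales with the inverse square of the f-number.
        static double stopsLost(double fFrom, double fTo) {
            return 2 * (Math.log(fTo / fFrom) / Math.log(2));
        }

        // ISO required to keep the same exposure after stopping down.
        static double equivalentIso(double isoFrom, double fFrom, double fTo) {
            return isoFrom * Math.pow(2, stopsLost(fFrom, fTo));
        }

        public static void main(String[] args) {
            System.out.printf("f/6.3: ~ISO %.0f%n", equivalentIso(800, 4.0, 6.3));  // ~2000
            System.out.printf("f/10:  ~ISO %.0f%n", equivalentIso(800, 4.0, 10.0)); // 5000
        }
    }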

My takeaway is to stop down the aperture as much as possible for macro photography, unless that would drive the ISO above 2000 or so. I recommend that users of macro lenses take a few test shots with their own cameras to determine the ISO limit for keeping fine details. Here’s where you can find the original three spider pictures:

  • All images are stored as full-size JPEGs in this Google+ gallery.
  • You do not need to log into Google or Google+ to view the gallery!
  • Click on each image to see a full-size JPEG with my description, if any.
  • When viewing a single image, click “Photo Details” for basic EXIF data.
  • Use the magnifying glass icon to zoom in, down to individual pixels.

Meanwhile I also posted a few iPhone 5S pictures on Google+ that I had not previously linked from my blog. These show two splendid baroque church buildings in the vicinity, the Metten monastery and the Asam basilica in Osterhofen. While it’s no match for the Sony Alpha 7R (or the NEX-7, for that matter), the iPhone 5S camera delivers a surprising amount of brightness and detail.

Passau Veste Oberhaus


Founded in 1219 to overlook Passau, Veste Oberhaus is a true medieval fortress at its core. That’s unmistakable when you try to reach the castle: the ascent, covering a height of 100 meters, resembles Frodo’s journey into Mordor, except with more sunshine and American tourists. As for the way back down, it’s so steep that it took me a few visits before I could stop clinging to the walls and walk down confidently.

The castle houses a permanent exhibition on Passau’s ancient and medieval history, and currently also a special exhibition on the 19th century and World War I. After a recent trip with just the iPhone 5s I decided it would be worth returning with my proper camera, the Sony Alpha 7R and its standard SEL-2470Z zoom lens. I took pictures covering most of the ascent (just for fun) as well as the splendid view from the observation tower, and some interesting exhibition pieces mostly from military history. Below is a downsized sample of the castle entrance:

Veste Oberhaus

The usual rules & procedures apply for viewing the pictures…

  • All Sony A7R images are stored as full-size JPEGs in this Google+ gallery.
  • All iPhone 5s images are attached to this Google+ post, including a lucky shot of a sunning lizard.
  • You do not need to log into Google or Google+ to view the galleries!
  • Click on each image to see a full-size JPEG with my description, if any.
  • When viewing a single image, click “Photo Details” for basic EXIF data.
  • Use the magnifying glass icon to zoom in, down to individual pixels.

Windows 10 Announced


Microsoft has officially announced the successor to the ill-conceived Windows 8, codenamed “Threshold” and now surprisingly numbered Windows 10 rather than 9. You see, “Windows 9” also matches the start of “Windows 95” and “Windows 98”… and a lot of short-sighted coders have used a startsWith string check to determine Windows versions. Also, the internal version number increments from 6.3 in Windows 8.1 to 6.4, presumably to avoid the popular major/minor version check goof.
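
Java is where the name-check pitfall is usually cited, so here is a hedged illustration – the os.name and os.version system properties are real, but the specific checks are my own sketch rather than anything from Microsoft’s announcement:

    // Two fragile OS version checks and a slightly safer alternative.
    public final class WindowsVersionCheck {

        public static void main(String[] args) {
            String osName = System.getProperty("os.name");       // e.g. "Windows 8.1"
            String osVersion = System.getProperty("os.version"); // e.g. "6.3"

            // Goof #1: "Windows 9" would also have matched "Windows 95" and "Windows 98".
            boolean brokenNameCheck = osName.startsWith("Windows 9");

            // Goof #2 (one common variant): comparing version strings lexically,
            // so "6.10" sorts before "6.4" and "10.0" sorts before "6.3".
            boolean brokenVersionCheck = osVersion.compareTo("6.2") >= 0;

            // Safer sketch: parse major and minor parts numerically before comparing.
            String[] parts = osVersion.split("\\.");
            int major = Integer.parseInt(parts[0]);
            int minor = parts.length > 1 ? Integer.parseInt(parts[1]) : 0;
            boolean isWindows8OrLater = osName.startsWith("Windows")
                    && (major > 6 || (major == 6 && minor >= 2));

            System.out.printf("%s (%s): broken=%b/%b, sane=%b%n",
                    osName, osVersion, brokenNameCheck, brokenVersionCheck, isWindows8OrLater);
        }
    }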

Windows 10 won’t be released until late next year. Then it’s supposed to run on any device, “from the Internet of Things to servers in enterprise datacenters,” and from small touch screens to large mouse & keyboard systems to screenless devices. If that sounds incredible, that’s because it is… and the announcement quickly backpedals to call Windows 10 a product family rather than a single OS, and without a unified GUI either. The developer-facing API, an improved Windows Runtime (WinRT), is what’s supposed to be unified:

And across this breadth of devices, we are delivering one application platform for our developers. Whether you’re building a game or a line of business application, there will be one way to write a universal app that targets the entire family. There will be one store, one way for applications to be discovered, purchased and updated across all of these devices.

There’s one huge catch in this model: you, the developer, are obliged to create and package all the distinct user experiences and connectivity plumbing required for each platform. Despite the PR insinuations, an application designed for a traditional Windows desktop will not magically run unchanged on a smartphone or web-enabled refrigerator, no matter how similar the internal APIs are.

Universal WinRT Apps

How about the opposite route, though? As promised Windows 10 will finally let WinRT (“Metro”) apps run in desktop windows, using all the familiar application and task switching controls. Those same “universal” apps should work on Windows Phone and Xbox One, meaning developers can now create a single application and deploy it to all Microsoft environments that meet certain minimum GUI requirements. The big question is, would they want to?

Windows desktop users want desktop applications, which are usually written against the Win32 or .NET/WPF APIs. These are widely used and offer full access to Windows desktop capabilities, whereas WinRT, from all I hear, is still relatively limited and immature. So maybe the trick is to leverage popular mobile apps that scale up to desktop screens? That might work for the upcoming Android laptops, but on Windows the desktop is vastly more popular than mobile or the odd Xbox app. (The Xbox One is bought for playing optimized console games, not for running random cross-platform apps.)

Revolution Postponed

So for the time being my assessment is unchanged. It’s a relief that Microsoft will restore proper desktop functionality including the old Start menu, and I certainly applaud the effort to introduce a new API that addresses long-standing gaps such as high DPI support, a curated app store for casual users, and support for a variety of devices. However, as long as Win32 decisively dominates the Microsoft ecosystem – including the Win32-based .NET APIs and popular Windows versions that don’t support WinRT – there’s little reason for developers to bet on another API.

Microsoft will have to deliver a WinRT API that’s so good and offers such an easy transition from Win32 that developers will voluntarily start the migration. And then, in order to extend Microsoft’s presence beyond the desktop, smart frameworks must flexibly cover the vast UX range from smartphones to multi-monitor PCs. Maybe the company is already preparing such things but so far I haven’t heard of them, and until all the pieces are in place the new unified API is a mere technical curiosity.

Further Reading

The blogosphere went wild with the announcement, so here’s a selection of links with more information. Despite my doubts about the unified application concept, Windows 10 certainly should be a worthwhile upgrade for current Windows users.

2014-10-04: Two more articles on the preview build. Microsoft’s Brandon LeBlanc has compiled new keyboard shortcuts, and Tim Anderson reports his first experiences with the preview.

Enter the E-Cigarette


Nicotine is an effective nootropic second only to caffeine in popularity. Lately it has been on the decline through little fault of its own, as the standard delivery method of burning or chewing tobacco is extremely unhealthy. Tobacco contains a powerful mix of carcinogenic chemicals (which do not include nicotine!), and inhaling smoke particles damages the entire respiratory system. Nicotine consumed without tobacco provides all the nootropic benefits with a vastly reduced health impact – hence the therapeutic use of nicotine patches as a substitute for smoking.

Indeed, aside from a vasoconstrictive effect that may be dangerous for weak constitutions and lethality in unrealistically high doses, the health effects of moderate nicotine intake appear more positive than negative overall. The advent of e-cigarettes should provide some motivation for intensified research, but so far the medical and political establishment seem to prefer demonizing nicotine wholesale, rather than providing advice for safe consumption. (Took them long enough to admit that moderate daily alcohol intake is generally beneficial…)

E-Cigarettes

Instead of burning dried tobacco leaves, e-cigarettes vaporize an e-liquid based on two harmless carrier substances (propylene glycol and vegetable glycerine) with added flavors and optionally nicotine, which is of course the whole point. The result is a vapor that can be safely inhaled without impacting lung capacity or causing other damage, while transmitting nicotine just as effectively as tobacco smoke. Below you see the two e-cigarette systems I tried so far.

E-Cigarettes

Click on image to show full-size version in new window

The top one is a Snoke, about the circumference of a tobacco cigarette but longer. The bottom one is a Dampfanstalt eLuv, about the size of a medium cigar. Both are German designs available at local stores. Looking at Amazon, there’s currently a huge variety of models by small companies all over the world, which will presumably go through the usual concentration process as the technology catches on.

These two devices are an interesting pair because they represent two different design approaches. What they have in common is a big separate battery block (left, unscrewed here) that can be charged via a USB adapter. For smoking you screw on the mouthpiece on the right, which contains the liquid and heating unit – and that’s where the differences lie.

Snoke

The Snoke mouthpiece is a sealed disposable unit (cartomizer, here called “Caps”) that’s sold in bundles of four, in packages sized and styled like regular cigarette packs. Each cap contains flavored liquid with or without nicotine, and an automatic vaporizer that engages when the user inhales. Amusingly, a simulated burning cigarette tip lights up on the other end, complete with a crackling sound effect…

This system is extremely compact and convenient – it fits any pocket that holds a normal cigarette, and requires no manual operations whatsoever. Unfortunately the results are lackluster. I found the vapor of both flavors I tried (tobacco and menthol) rather thin, sharp and unpleasant, and far less rich in nicotine than the nominal 16 mg per cap would suggest. Possibly chain smokers of filter cigarettes will disagree and find this mix exactly right, though.

Dampfanstalt

But I’m a cigar smoker, so on to the cigar-sized Dampfanstalt contraption! This cartomizer can be refilled by unscrewing the tip and dripping in liquid sold as separate 10 ml bottles. Visible inside the transparent tank, a wick transports liquid to the heating unit. The latter must be activated manually by pushing the prominent button on the battery unit. That button also must be pushed five times in quick succession to toggle standby mode – turning it off completely preserves battery charge. Finally, the cartomizer has a limited lifetime and needs to be replaced eventually.

Between button operation, manual refilling, and sheer size this unit is admittedly rather inconvenient. That’s all forgotten once you take a puff, though. The vapor produced from cigar-flavored liquid with 12 mg nicotine is smooth and strong, not quite on the level of a real cigar but considerably closer than the Snoke. I quickly switched to the Dampfanstalt model exclusively. It’s too early to evaluate longevity but so far I’m quite happy with this unexpected application of USB-charged electronics. (And if the eLuv is too small for you there are two even bigger sizes…)

2014-10-24: I forgot to mention the environmental effects of the vapor – because there really aren’t any. Tobacco smoke has a nasty habit of clinging to clothes and furniture for days, producing that stale smell that makes smokers’ rooms unpleasant even for smokers. As far as I can tell, e-liquid vapor does not linger at all. Whatever is not inhaled vanishes very quickly without a trace. This should make e-cigarettes suitable even for non-smoking environments.
