∞ Java 8, HTML5 & f-Stop


Here’s the latest roundup of new links in my articles and archives. From now on I’ll precede the titles of such posts with an infinity symbol (∞ U+221E) that looks a bit like links in a chain, so as to make them more obvious in the list of new posts.

Java for C# Programmers — Baeldung’s Java 8 provides an exhaustive list of planned, implemented, and dropped features for the next major Java version.

Developer Links — Can I use… shows handy compatibility tables for support of HTML5, CSS3, SVG etc. in desktop and mobile browsers. Modernizr is a JavaScript library that detects and works around missing features.

Assorted Links — Photographer Matthew Coles has written extremely thorough explanations of the f-stop, depth of field, and more specialized subjects. You’ll finally learn how smartphones can take great pictures without a focusing mechanism!

And here are some bonus links that piled up over the last month…


Programming Languages in 2013


Andrew Binstock’s annual Rise And Fall of Languages analyzes Google Trends, the TIOBE index, and Ohloh’s coverage of 600,000 open-source projects to discover… that there wasn’t much to discover. Java and C++ continue their slow long-term decline, but as Daniel Lemire notes that decline is so slow that any year-over-year movement might as well be random noise. Moreover, it does not prevent both languages from retaining top spots in all measurements. In the case of Java, Binstock himself had acknowledged last October that a huge disconnect exists between Internet gossip and programming reality.

C# & JavaScript — There are also some curious disconnects between the TIOBE index (based on web searches) and other metrics. For example, RedMonk puts both C# and JavaScript near the top on both Stack Overflow and GitHub, yet TIOBE shows a downright ridiculous 1.6% for JavaScript and a continuing decline for C# since Microsoft started its War on .NET and lost developer trust. This is likely an endemic problem with evaluating raw web searches for languages that are typically used within bigger technological systems. Searches for HTML, CSS, jQuery, or Node.js would not count for JavaScript, and searches for .NET, ASP.NET, WPF, or Xamarin components would not count for C#. Indeed, TIOBE states under “Bugs & Change Requests” that it has explicitly rejected the inclusion of “Rails, jQuery, JSP, etc.” as search terms! Since few people do extensive JavaScript programming without jQuery, this is likely to drop a large portion of relevant searches.

Perl & Python — Binstock highlights a remarkable reversal of these two languages’ popularity among open-source projects. Python has established itself as “the top general-purpose scripting language” while Perl seems to be virtually vanishing. Python has become especially popular among data scientists, as Tal Yarkoni and Mikio Braun both attest. Interestingly, Braun points to Matlab licensing changes as one motivating factor: academics could no longer afford Matlab and had to start looking for alternatives. As for Perl… well, Siebert, Stefik & Slattery (2011) found that Perl users were unable to write programs more accurately than those using a language designed with syntax chosen randomly from the ASCII table. As I was never a fan of the “comic-book characters swearing” school of language design, I certainly won’t be missing Perl if it’s really vanishing!

Language Adoption — Also last year, Vivek Haldar discovered an interesting study by Meyerovich & Rabkin, Empirical Analysis of Programming Language Adoption. Evaluating open-source use and programmer surveys, the authors found that the availability of open-source libraries, existing code, and experience all strongly influence language selection – but language features such as performance, reliability, and semantics do not. Since language adoption also follows a power law, with a few languages used virtually everywhere, this confirms the empirically obvious fact that language quality itself will at best result in niche success. Mainstream popularity requires that the language receives good library support and successfully spreads to other niches, until it’s popular for being popular, Kardashian-style.

Replacing JavaScript — Finally, efforts continued to improve or replace this ubiquitous but frankly terrible language. John Resig has written a nice overview of Mozilla’s Asm.js: The JavaScript Compile Target, and Peter Bright confirms that Asm.js actually works, although with the expected caveats for pre-production software. On the high-level side, Gaston Hillar demonstrates how Microsoft’s TypeScript compiles to JavaScript. However, I rather doubt that many developers will want to adopt another Microsoft product to write cross-platform web applications. I think Google’s Dart is far more likely to emerge as the most popular sane alternative to JavaScript. Already natively supported in Chrome, Dart has now also been submitted for ECMA standardization which means other companies will have a precise and stable implementation target.

Oppo BDP-105D: Audiophile Blu-ray Player


The Oppo BDP-105D is one of the biggest, heaviest, most expensive Blu-ray players you can buy, but with a quality and feature list to match – including audiophile stereo reproduction. I recently got one of those to replace my PS3 and assorted other players, so here’s a mostly enthusiastic review with a brief description of my remaining AV setup to put things into context. The BDP-105 has the obligatory current and future-proofing capabilities, from 4K upscaling and 3D material to connectivity with home networks and various streaming services, but I’ll skip those here and focus on the noteworthy parts.

Player & Television

Virtually all disk formats other than DVD-RAM and HD-DVD are supported, in particular audio formats: CD (I own a lot), SACD (I own a few), HDCD, and DVD Audio. The BDP-105 employs the ES9018 Sabre32 Reference DAC for audio conversion – indeed two of them, one for 7.1 surround sound and one for dedicated stereo outputs! And sure enough, that stereo sound is easily a match for my old Linn Ikemi, a pure CD player that had actually been more expensive despite lacking any video capabilities. (Today, Linn has dropped all digital disk players from its product lineup, focusing on its venerable analog turntables and digital streaming sources instead.) Befitting an audiophile device, the BDP-105 is passively cooled which is partly responsible for its bulk and weight.

Some more weight is due to the built-in headphone amplifier, a sadly half-baked feature of very limited use. Following purist doctrine, the BDP-105 contains no tone controls whatsoever, meaning I can’t get the bass boost I consider necessary to recreate natural-sounding bass with most sources and headphones. Worse, there is no Dolby Headphone or other virtual surround processor of any kind, meaning the headphone outlet is perfectly useless for most movies. (If you doubt that, try listening to a modern movie with a good Dolby Headphone amplifier just once. I found it impossible to go back.) Rounding out the list of deficiencies, the output level is not great despite the dedicated amplifier. Hitting the volume limit is a distinct possibility with high-end headphones. The sound reproduction itself is flawless to be sure, but that’s little use unless you only listen to stereo sources, don’t want a strong bass, and have unusually efficient headphones for a high-end setup.

Fortunately, that’s the only disappointment. The BDP-105’s video section is as impressive as its audio decoder. HDMI Deep Color is supported for future sources and TVs, but more relevant right now is the excellent Y’CbCr 4:4:4 output with proper 1080p/24 cadence for movies. On my beloved Sony KDL-40W4000, a discontinued LCD TV with deep blacks and CRT-level response times, I could use “Cinema” mode with “Color Space: Wide” and otherwise almost entirely neutral settings to obtain a great-looking and well-calibrated picture. (On my old PS3, “Color Space: Standard” and a warmer tint looked best, though still inferior to the Oppo.) Calibration was handled by the Spears & Munsil HD Benchmark (2nd ed.), a very thorough test that the good folks at vendor JvB Digital threw in with my purchase. JvB also offer DVD and Blu-ray region unlocking for Oppos and many other players, essential to anyone not living in America and reason enough for me to pay for shipping from the Netherlands.

Lastly, the BDP-105D includes two noteworthy video goodies. One is sharpening and anti-aliasing courtesy of VRS ClearView. Here I must say I didn’t notice much positive effect with my sources and display, so I left it off. Not so with the other technology, Darbee Visual Presence, which distinguishes the BDP-105D model from the plain BDP-105. This adaptive contrast enhancement slightly deepens shadows already existing in the image, emphasizing structures and contours without affecting the overall brightness or color balance. I found that very high settings could cause flickering artifacts, but at 60% (out of 120%) Darbee adds a stunning yet completely natural-looking extra bit of clarity that brings out the fine details in HD images. Absolutely worth the extra price.

Speakers & Headphones

My speaker setup is driven by an old stereo workhorse, the NAD C 372 integrated amplifier with 150 continuous watts per channel. I’m bypassing its simple tone controls entirely, and instead rely on an unusual feature for tone adjustment: the NAD’s pre-amp section is routed externally into the power-amp section. The two pairs of standard cinch connections are normally bridged, but you can remove the bridge and insert any sound processor you like, allowing that processor to operate on any selected input source. I’m adding bass amplification with the Nubert ATM 102, a module that’s designed to precisely match the acoustic properties of my nuLine 102 speakers (discontinued, but like the nuLine 84 with one more chassis). This combination sounds as clean and precise as any high-end speakers I’ve heard, yet produces rich and deep bass comparable to live concerts. AV receivers generally don’t offer this kind of external processing option, which is one reason why I’m still sticking with my old stereo brick. Nubert is a German mail-order loudspeaker specialist, by the way, and definitely worth considering if you’re in Central Europe.

My headphone setup is based on the Philips SBC HD1500U. Sadly it’s no longer in production either, but here’s the PDF manual with specifications. This set was marketed for its wireless headphones (fairly decent though heavy), but its real attraction is the powerful amplifier with Dolby Headphone capability. I found it handles even demanding high-end headphones with aplomb, such as my current AKG K701 (also out of production, see the similar successor model) which is a bit much for the Oppo. Most importantly, the surround processor splendidly reproduces the expansive space of cinema soundtracks in multi-channel Dolby Digital or DTS formats. The only drawbacks are the now-outdated digital inputs (S/PDIF and optical), meaning you only get basic DD/DTS from high-bandwidth formats such as Dolby TrueHD or DTS HD Master Audio that require HDMI for their extra bits. I’m still looking for a good replacement headphone amplifier that supports Dolby Headphone and all HDMI formats – though maybe I should check out some of Yamaha’s copious Silent Cinema products, too.

How Great Was Alexander?


Following his observations on Napoleon, sociologist Randall Collins has posted another insightful article on one of history’s greatest warlords: What Made Alexander Great? Once again, I recommend you take an hour or two to read the whole thing. Below follows a summary with noteworthy excerpts.

Philip’s Groundwork

Alexander’s father Philip laid the groundwork for his son’s success. His importance can hardly be overestimated.

Philip’s Macedonian army, which he put together between 360 and 336 BC, incorporated all the most advanced improvements. Most importantly, he added heavy cavalry, operating on both flanks with the phalanx in the center. Philip’s cavalry were not just for chasing-down after the enemy broke ranks, but for breaking the enemy formation itself. Philip was one of the first to perfect a combined-arms battle tactic: the phalanx would engage and stymie the enemy’s massed formation, whereupon the cavalry would break it open on the flanks or rear.

Reserves and flanking maneuvers had been famously introduced by Epaminondas of Thebes where Philip had been hostage as a youth. Philip expanded on these tactics, and also adopted the most advanced Greek siege techniques of his time. Finally, his soldiers carried most of their own equipment and supplies, greatly increasing marching speed over other armies with their bulky wagon trains. Note how Alexander built on not one but two famous predecessors, one constrained by his relatively minor theater of operations, the other conveniently assassinated at just the right time – time for a decisive campaign against a crumbling empire before the new tactics had become commonplace.

The late-blooming Macedonia stepped onto the scene as the squabbling Greek city-states had become permanently deadlocked and the vast Persian empire had reached the administrative limits of its expansion. Persia had failed to project sufficient force into Greece against strong naval resistance in the early 5th century BC, and the Greek wars for hegemony in the 4th century had ended in stalemate. On this stage Macedonia could grow by swallowing the barbarian inland areas to the north which the established powers ignored.

Philip, who grew up as a hostage in one of the civilized city-states, had an eye for what counted there; after returning to Macedon, he made a point of conquering barbarian land that had gold mines, as well as seaports as far as the straits, where the grain trade passed upon which Athens and the other Greek city-states depended. In short, he started by becoming the big frog in a small pond, while learning the military and cultural techniques of his more civilized neighbours, and combining them with the advantages he could see on the periphery. At a point reached around 340 BC, the city-states woke up to find that their biggest threat was not Persia, nor one of their own civilized powers, but a semi-barbarian upstart, whose armies and resources were now bigger and better than their own.

Alexander’s Success

Both Philip and Alexander expanded largely by diplomacy, reinforced by the occasional victorious battle and razed city. For Alexander diplomacy was also essential since a long-distance army expedition could survive only if locals were recruited in advance to provide supplies along the way. The creation of empires out of many small diplomatic alliances is a recurring theme in antiquity. Coming out of the semi-barbarian uplands of Iran, Cyrus built the Persian empire in much the same way as Philip built Macedonia’s, leaving minor potentates in power in return for supplies and tributes. (The Roman empire also relied heavily on alliances with lesser neighbors, as detailed by Edward N. Luttwak.)

By the time Alexander gained the throne, an invasion of Persia had long been mooted in Greece – not because Persia was still a threat but, on the contrary, because the Mediterranean was already filled with Greek colonies and the rising powers of Rome and Carthage in the west. In comparison, Persia provided the easiest target to dispose of surplus youths and idle mercenaries. For Alexander’s complete success, it was also essential that Persia was already sufficiently civilized to support a conqueror and his armies.

Sheer military force cannot take over a territory before it has developed to an economic level at which the conquering forces can be sustained. At the cusp of civilization, large armies couldn’t even traverse such places if economic organization isn’t complex enough. Conversely, a state with a strong enough infrastructure to support its military rulers also can support a conquering army. No Greek general, like Alexander or anyone else, could have conquered an empire spreading into the Iranian plateau and beyond into Central Asia, in the 500s BC when those places were still isolated agricultural oases amidst tribes and pastoralists. It required the intermediate step such as Cyrus took, to build the logistics networks. A person-centered way of saying this would be: no Cyrus, no Alexander.

Alexander knew how to use enemy logistics in his favor, too. At Issus he won against a Persian army that was already disintegrating and out of supply, forced to move away from Darius’s chosen battleground near the Syrian Gates because Alexander had simply left them standing there for weeks. Once battles were joined, Alexander turned the Persians’ greater numbers against them by crushing one weak spot where he could employ local superiority, thus causing panic to spread through the densely packed crowds. Persian strategy preferred huge numbers over disciplined organization in an attempt to overawe enemies before a battle. Against an enemy who refused to be overawed, that was a recipe for failure.

Alexander’s Demise

Collins ends with observations on Alexander’s personal character, his famous antics and eventual clashes with his fellow Macedonians. Many of these episodes are most entertainingly put on display in Oliver Stone’s 2004 epic Alexander (just make sure to get the “Final Cut” which actually makes sense).

Now Alexander is in a structural bind. As Persian King, and in constant diplomacy playing King of Kings to the chieftains around him, he is caught in the ceremonial that exalts him. As leader of the world’s best military, he needs to keep up the solidarity of his Companions. The ambiguity of that name – more apparent to us than it would have been at the time – displays the two dimensions that were gradually coming apart: his companion buddies, a fraternity of fellow-carousers, fighters who have each other’s back; and the purely formal designation, members of the elite with privileged access to the King.

Binge drinking eventually killed him, it seems. With the amount of alcohol routinely downed for fellowship’s sake, no disease or poisoning would have been necessary.

The triumphant return to the center of the Empire was one carousing celebration after another. There was a drinking contest with a prize; the winner drank 12 quarts of wine and died in three days; another 40 guests died because they were too drunk to cover themselves in a sudden storm of cold weather. At another great feast, featuring 3000 entertainers imported from Greece, Alexander’s closest friend Hephaestion fell ill after swallowing an entire flagon. […] Someone stepped forward, one of the original Macedonian Companions, inviting him on an all-night drinking binge. They did it again the next night. Alexander woke up with a fever, steadily worsened, and died. It was alcohol poisoning, of course – literally drinking himself to death, like his companions.

Lasting Impact

Since Alexander’s empire quickly broke apart after his death, his lasting impact was the spread of Greek cultural and commercial connections eastward. As it turned out, this first eased Rome’s eastern expansion once it had become involved in Greek affairs, and later provided the birthplace of Christianity.

The inadvertent consequence of Alexander’s conquest was to create the conditions for the linguistically unified networks that became the great universalistic religion of the West. The panhellenic Greek spokesmen who in the 300s BC advocated colonizing land won from the Persian Empire thought they were exporting Greek democracy. This did not happen. What got created, instead, was a cosmopolitan network structure, with Greek as its lingua franca. In it the very idea of universalism – of a religion free from worldly entanglements and local loyalties – could take hold.

Finally, Collins discusses the obligatory grognard question, “Alexander versus Napoleon: Who would win?” The answer is sadly inconclusive…

On an ancient battlefield, Napoleon would have been too small to play much part. On a modern battlefield, Alexander would have been one of the wild barbarians whose cavalry charge got mowed down by Napoleon’s artillery. Maybe he was, in the form of one of the native armies Napoleon annihilated in Egypt or Syria. Alexander won all his battles, Napoleon lost at least one big one. But Alexander fought perhaps a third as many battles, all of them one-sided, the most advanced military organization of its day against inferior ones. Napoleon fought armies much like his own, and towards the latter part of his career, his enemies caught up with his best techniques. It is foolish to attribute their respective records to such transcendental impossibility as sheer decontextualized talent. […]

They lived on opposite sides of a moral divide. Alexander was far more personally cruel than Napoleon, or other modern people, could be. Getting into Alexander’s world makes us realize how different are human beings under different social circumstances. Today someone like Alexander would be on death row. Napoleon one could have liked.

Sharing Buttons, Take Two


In December 2012 I had removed all social network sharing buttons from my website and weblog, due to lack of use. During these last fourteen months, visitor counts have been growing briskly, and referrer activity shows that some people do occasionally come to this obscure domain from Twitter or Facebook.

Part of my decision to remove sharing buttons was that the pages on my main website are nicely static and script-free, except for Google Analytics and some minimal JavaScript code for responsive layout. But last summer I found an informative post by Guillermo Garron with passive sharing links for the three most important networks – Twitter, Google+, and Facebook. I was aware of the passive Twitter link but not of the other two. His links work well and have a tiny impact on page loading, so I decided to add them to the footer of my static pages.
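
For reference, such passive links are just plain HTML anchors pointing at each network’s sharing endpoint, roughly like this. The endpoint URLs are from memory and the shared address is a placeholder, so check each network’s current documentation before copying (and remember to URL-encode the shared address):

<a href="https://twitter.com/share?url=http://example.com/">Share on Twitter</a>
<a href="https://plus.google.com/share?url=http://example.com/">Share on Google+</a>
<a href="https://www.facebook.com/sharer/sharer.php?u=http://example.com/">Share on Facebook</a>

Since these are ordinary links, nothing loads or executes until a visitor actually clicks one.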

On the weblog, I admit I couldn’t be bothered since Jetpack’s default integration of social networks is so convenient. So I simply added the equivalent three buttons back to the footer of each post and page. Let’s see if they get some use this time around…

2014-02-13: Couldn’t resist tinkering some more and added Disqus comments to the main website. That’s a whole lot of remote JavaScript but I hope the asynchronous loading and placement at the bottom of the page will minimize any distractions. I had originally intended for visitors to comment on the latest weblog entry for any given page, but few people ever seemed to discover that link. The Disqus block is hard to overlook!
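
For the curious, the standard Disqus embed is a small asynchronous loader along the following lines. The shortname here is a hypothetical placeholder; Disqus generates the exact snippet for each registered site:

var disqus_shortname = 'example'; // hypothetical site shortname

(function() {
    // create a script element with the async flag so page rendering never blocks
    var dsq = document.createElement('script');
    dsq.type = 'text/javascript';
    dsq.async = true;
    dsq.src = '//' + disqus_shortname + '.disqus.com/embed.js';
    (document.getElementsByTagName('head')[0] ||
        document.getElementsByTagName('body')[0]).appendChild(dsq);
})();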

WordPress Theme on Static Site


Just finished a major redesign of the original Kynosarges website. I might claim its extremely basic old layout was the pinnacle of suave hipster minimalism, except it was actually because I’m clueless about web design. Seeing the pretty weblog here I wanted to have the same look for the static pages on the old website. Yet I didn’t want to rely on a slow and fragile PHP/MySQL contraption for all my content, and besides WordPress doesn’t support my old canonical URLs that all end in .html. On the other hand, the WordPress theme files are too complex to adapt for my existing pages. What to do?

The solution presented itself when I looked at the generated HTML page for one of my WordPress posts, ignoring the obscure mechanism behind it. For all its underlying complexity, WordPress generates fairly clean and simple HTML. I could quite easily replicate the structure in the XSLT transformation that builds my static pages. I kept the class names used for style tagging but left out all the dynamic parts that would require database access or WordPress connectivity, replacing them with static content. Finally, I linked the pages to the same ordinary CSS and JavaScript files used by my WordPress theme, adding just a few rules to handle specialties such as indented paragraphs without spacing.
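
To give a flavor of the result, the static skeleton ends up looking something like this. The element structure and class names are whatever your theme’s stylesheet expects; the ones below are hypothetical examples in the style of standard WordPress themes:

<div id="page" class="hfeed site">
    <article class="post">
        <header class="entry-header">…page title…</header>
        <div class="entry-content">…static page content…</div>
    </article>
</div>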

Painless Update (mostly)

The whole procedure was surprisingly painless and took less than two days. Indeed, a good part of that time was spent on figuring out when each page had originally been created, as I had foolishly kept only the last modification dates. To find the first publications of older pages I had to go spelunking in my project histories and the Wayback Machine’s search robot records. I think they’re all fairly accurate now, including the very first page I ever put on the Internet. (It’s for a silly strategy game, of course.)

Website and weblog are now both presented in the same pretty responsive package. There’s a single combined About page, and the website got the nice Twitter timeline. Happily, the invaluable table scrolling trick still works, and the recent novelties are present as well. The CSS fly-out menus were replaced with JavaScript-powered drop-down menus, though. I tried to fix all page content that needed adjustment for the new layout – do let me know if anything doesn’t quite work yet.

One known problem can’t be fixed until Google fixes its browser. Surprisingly, Chrome is the only modern browser left that doesn’t support automatic hyphenation, meaning my justified text layout sometimes causes excessively spaced characters and words. I tried to ameliorate the worst cases by sprinkling soft hyphens (&shy; U+00AD) throughout long directory names and the like, but ultimately Google needs to wake up and add this badly needed feature.
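
If you would rather not sprinkle them by hand, a few lines of JavaScript could automate the same idea. This is merely a hypothetical sketch of the technique, not something I actually run here:

// insert a soft hyphen (U+00AD) after every 12 characters of an unbroken token,
// giving the browser extra break opportunities in long names
function softHyphenate(text) {
    return text.replace(/(\S{12})(?=\S)/g, '$1\u00AD');
}

You would apply this to text nodes only, of course; soft hyphens inside actual URLs or attribute values would break them.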

Static WordPress (sort of)

Building static sites with WordPress is indeed possible but works much like the old sculptor joke: take a block of marble and chop off everything that doesn’t look like a horse. I essentially took a dynamic WordPress site and threw away all the dynamic parts…

  1. Build a regular dynamic WordPress site, tweaking the style until you’re happy.
  2. Save a generated HTML page. Adopt the structure and class names in your static pages.
  3. Link your static pages to the original WordPress theme files (CSS with some JS support).
  4. Replace any dynamic widgets with static or non-WordPress equivalents.
  5. Behold the Frankensite – static pages with a WordPress theme!

Static sites don’t need to lack all dynamic functionality, of course. I’ve integrated Disqus comments and my Twitter timeline, for example. You just can’t use any dynamic features provided by WordPress, as that system expects all content to come from its databases. You can only use those WordPress features (specifically layout and styling) that apply to an HTML page after WordPress would have assembled it from database contents.

Default OpenGraph Image for Jetpack


Jetpack’s “Publicize” feature automatically adds a set of OpenGraph and Twitter Card tags to WordPress posts. (You don’t need to connect any publicize channels to get these tags, but you do need to activate the module itself in the Jetpack control panel.) Generally this works quite well, except for one situation: when you publish posts without images. Unfortunately I do that a lot, and the result is this:

<meta property="og:image" content="http://wordpress.com/i/blank.jpg" />

When Jetpack can’t find any image specifically associated with a post or page, it falls back on a transparent 200×200 image served up by WordPress.com. This makes Facebook happy because Facebook wants a complete set of OpenGraph tags, including an image whose size is at least 200×200 pixels, but obviously it’s not exactly ideal.

What I (and presumably any blogger) want is to present a nice thumbnail image for my blog whenever a more specific image is not available. By default, WordPress automatically creates a thumbnail for uploaded images such as the Akropolis header image above… but here’s the first catch: that thumbnail defaults to 150×150 pixels which is too small for Facebook. So I first went into Settings: Media and increased the thumbnail size to 200×200, then deleted and re-uploaded my header image.

The next step is to add an OpenGraph tag for the thumbnail image. On my static website where I directly control all the HTML content, that’s very easy. I just placed the following line among the OpenGraph tags in the HTML head (adjust for your own thumbnail URL):

<meta property="og:image" content="http://news.kynosarges.org/wp-content/uploads/AkropolisHeader-200x200.jpg" />

Now how to convince Jetpack to add this tag instead of its blank WordPress.com image? This requires customizing your theme’s functions.php file, as outlined by Jetpack’s Jeremy Herve. His example only covers the front page which never has a specific image associated with it, but we want to cover any post or page without a dedicated image. The required PHP code looks like this:

/**
 * Adds default OpenGraph image.
 * Christoph Nahr 2014-02-17
 * @return the argument string array, possibly modified
 */
function add_default_image( $tags ) {
    if ( $tags['og:image'][0] == "http://wordpress.com/i/blank.jpg" ) {
        // Remove the default blank image added by Jetpack
        unset( $tags['og:image'][0] );
        $tags['og:image'][0] = 'http://news.kynosarges.org/wp-content/uploads/AkropolisHeader-200x200.jpg';
    }
    return $tags;
}
add_filter( 'jetpack_open_graph_tags', 'add_default_image' );

Observe a few PHP catches. The og:image element of the $tags array is actually itself an array of strings, not just a simple string. If the blank picture was added it’s always the first (and only) element, with the PHP default index zero. As of Jetpack 2.8, you need to inspect function jetpack_og_get_image in file functions.opengraph.php to figure all this out.

I replace only that first image, just in case some other code has added more images. Unsetting the array element before changing it shouldn’t be technically necessary, but all the PHP experts seem to do it. Proper defensive coding would also require checking for the presence of the og:image element and for the type and size of its array value, but the definition of jetpack_og_get_image guarantees that the filter always sees a string array with at least one element, so we skip all that.

And that’s how you associate default OpenGraph images with WordPress posts!

∞ Pilcrows & Pointers


Typography Links: Keith Houston’s Shady Characters explores “the secret life of punctuation” in meticulous detail. His fascinating 2011 series The Pilcrow (¶) (part 2, part 3) traces the history of this ancient paragraph sign in the context of evolving European typography.

Font Squirrel catalogs a huge number of decent text fonts that are always free for commercial use, and often even allow free redistribution. Handy icons indicate the licensing status of each font. I think I’ll nab one of the high-tech fonts for my Star Chess re-release…

Assorted Links: Tufts University’s Perseus Digital Library hosts virtually all Greek and Roman classics, along with commentary and interactive dictionaries (just click on any word!), as well as some later European and American texts.

Regina Nuzzo’s Statistical errors documents the murky history and present abuse of p-values, still a popular way to pretend that exploratory (at best) or fraudulent (at worst) scientific studies yielded meaningful results fit for publication.

Two of my Subscriptions recommendations joined forces in the latest Software Engineering Radio episode. Martin Thompson of the excellent Mechanical Sympathy blog talks about performance pitfalls on modern computer systems, such as complex pointer-heavy data structures that break caching.

Lastly, Veritasium posted a nice video showing that a substantial number of Facebook “Likes” actually come from “Like farms” – even when the page owner didn’t request them. Good for Facebook since page owners are now forced to pay for better promotion, so as to reach at least some legitimate fans!


Click Tracking with Google Analytics


I had enabled Google Analytics last September but, until recently, kept WordPress Statistics running as well. One reason was its terribly addictive up-to-the-minute stats view, but more importantly WP tracks outbound clicks and GA doesn’t. Until you write some code, that is.

Google Analytics can record hyperlink clicks (and all kinds of other stuff) using its event tracking mechanism. That’s conceptually simple: you just make a _gaq.push call for any HTML event you want to record. The problem is that you’d have to annotate every single outbound hyperlink with an onclick handler, and I certainly didn’t want to do that.
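
To illustrate, manual annotation means decorating every single external link roughly like this (hypothetical example):

<a href="http://example.com/" onclick="_gaq.push(['_trackEvent', 'outbound', 'click', this.href]);">Example</a>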

A web search for better alternatives came up strangely empty. People seem to either actually use manual annotation, or else go whole hog for a heavyweight jQuery solution. But as it turns out, it’s extremely easy to automatically and efficiently annotate links with just a few lines of JavaScript code. Here’s everything I appended to my original Google Analytics code snippet:

// track clicks on outbound links
function trackOutbound(e) {
  if (!e) var e = window.event;

  var target = e.target || e.srcElement; // old IE stores the element in srcElement
  if (!target || target.tagName != 'A') return;
  var href = target.href;
  if (!href || href.indexOf('kynosarges') >= 0) return;

  _gaq.push(['_trackEvent', 'outbound', 'click', href]);
}

// attach global event listener
if (window.addEventListener)
  window.addEventListener("mousedown", trackOutbound, false);
else if (window.attachEvent)
  window.attachEvent("onmousedown", trackOutbound);
else
  window.onmousedown = trackOutbound;

The first line in trackOutbound, the srcElement fallback, and the three if branches are for compatibility with old Internet Explorer versions. Those branches ask the top-level HTML window to call trackOutbound whenever any part of its surface gets clicked on. The mousedown event gets “bubbled up” to the top window as the first step in any link click, followed by mouseup and finally click. Tracking mousedown instead of click means our document is actually still present for the event handler – some developers found that early unloading can be a problem for click tracking. Inside trackOutbound, we check for a valid hyperlink request outside the Kynosarges domain, and if so push a _trackEvent for the URL.

To use this code in another domain, simply replace the Kynosarges test with something more appropriate for you. Once I had added the code, outbound clicks appeared immediately in the real-time event view of my Google Analytics dashboard. This code works on both my static website and on the WordPress blog without any modifications, so it should work elsewhere too. Note that I’m still using “classic” analytics (ga.js) as opposed to “universal” analytics (analytics.js) – the latter uses a different syntax.
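
For universal analytics, the push call in trackOutbound would change along these lines, using Google’s documented analytics.js event syntax:

// classic analytics (ga.js), as in the code above
_gaq.push(['_trackEvent', 'outbound', 'click', href]);

// universal analytics (analytics.js) equivalent
ga('send', 'event', 'outbound', 'click', href);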

The beauty of this solution is that we never rewrite the actual HTML contents – neither manually nor with a time-consuming global DOM manipulation on page load. Code executes only if the user actually clicks somewhere, and then only for the specific clicked element. I’m not a JavaScript expert but this is likely the most efficient and least disruptive way to track clicks.

Struct Performance 2014


Once again and probably for the last time, I’ve updated my .NET Struct Performance article with results for the latest batch of compilers and runtimes. Microsoft’s and Mono’s current CLRs are unchanged as expected, and so is the excellent MinGW gcc. Visual C++ 2013 gave a nasty shock with massive optimizer failures for user-defined types in both 32- and 64-bit mode. Oracle’s JDK 7u51 produced a smaller unpleasant surprise with an optimizer regression for primitive values that must have happened at some point after 7u13. Sadly, such are the dangers of sensitive microbenchmarks. These unpredictable changes are one reason why I think it’s time to put my little test to bed. The other is my plan to finally upgrade the computer I had been testing on, so results would no longer be comparable anyway.

But for now, let’s look at what has become the most interesting part of a benchmark originally designed for the .NET Framework: the JavaScript results. All tested browsers (Chrome, Firefox, Internet Explorer) were once again faster in almost every test. Save for one small IE regression concerning primitive values, all current browsers now perform well within the ballpark of the Mono runtime. Chrome and Firefox are within 50% of equivalent tests for user-defined types, and even IE11 matches at least the slowest Mono result. The following chart, roughly organized by release years, illustrates the massive progress JavaScript has made towards other languages.

Struct Performance Chart

Click the image for its original size. As you see, only good C++ compilers (i.e. gcc in this test) are still significantly faster than JavaScript. The other languages are getting uncomfortably squeezed into a dwindling space between this universally available runtime and the domain of native code experts.

Firefox 27 delivered an especially remarkable result for “naked” double values (no user-defined types), taking only 2,270 msec compared to 1,030 for the C++ compilers. I suspect the runtime may have “cheated” by reinterpreting the specified floating-point variables as integers, as permitted by the test semantics. That would be a nice exploit of JavaScript’s flexible type system.

Tales from the Roman Republic


Rome’s messy transition from republic to principate has been well-documented by ancient authors and often revisited by modern ones. Since 1990 there has been a veritable explosion of historical fiction set in this era. I’ve devoured a good part of it, so here are some recommendations for your reading pleasure. The authors generally keep to the historical records where available, and reserve their imagination for filling in missing details. Review disclaimer: I’ve finished the entire series by Roberts and Saylor, but currently only the first book by Harris and two-and-a-half by McCullough.

Colleen McCullough — Famous for her 1977 novel The Thorn Birds, McCullough later wrote the Master of Rome series (1990–2007). The seven 1000-page bricks cover the last century of the Roman Republic, from the rise of Gaius Marius (110 BC) to the end of Mark Antony (27 BC). Consequently, there is no single protagonist but rather a labyrinth of (more or less) famous names that might overwhelm those not already into Roman history. You should also be aware that McCullough loves explicit sex scenes, although they don’t dominate the narrative. Otherwise connoisseurs are in for a treat, in terms of both quantity and quality. This is the best and most comprehensive novelization of the Roman Republic.

John Maddox Roberts — Author of numerous historical and fantasy novels, Roberts is a renowned pulp fiction specialist. His thirteen SPQR mysteries (1990–2010) expertly merge the hard-boiled detective genre with the decline of the Roman Republic. They cover the years 70–46 BC through the eyes of a fictional protagonist, Decius Caecilius Metellus, who investigates crime on the side while pursuing a Roman nobleman’s political career. As you would expect, the stories focus more on Decius’ rowdy adventures than on historical events. Whether you find that acceptable is a matter of taste, but rest assured the swashbuckling is immensely entertaining. Here’s a typical quip from The River God’s Vengeance:

Marcus Porcius Cato was the enemy of all things modern or foreign. These things included sleeping late, eating well, bathing in hot water, and enjoying anything beautiful. He studied philosophy and even wrote philosophical tracts, but he was naturally attracted to the Stoics since they were the most disagreeable of all the Greeks.

Steven Saylor — Best known for the Roma Sub Rosa series, comprising ten main novels (1991–2008) and several short story collections and prequels. Similar to SPQR, this series also mixes the modern mystery genre with a Roman Republican setting (80–46 BC). The fictional protagonist Gordianus is rather less illustrious than Roberts’ Metellus, and conducts his criminal investigations for money rather than fun. Although Saylor seems to be more highly regarded than Roberts, I cannot share that judgment. His books are decent enough but the narrator has a bad case of modern American liberalism stuck in a Roman body, the prose sometimes veers off into stilted attempts at artfulness, and some of Saylor’s seemingly historical scenes are flat-out wrong. One example that stuck in my mind is his horrifying depiction of Roman galley slaves slowly dying in their own filth which he presents as normal for the era… except that ancient navies almost never used slave rowers. If they did, it was in times of emergency, usually with the promise of freedom. Roberts correctly points this out, which is one reason I prefer his work to Saylor’s.

Robert Harris — Journalist and historian turned novelist, Harris recently started the Cicero trilogy (Imperium 2006, Lustrum 2009). Note that Conspirata is merely Lustrum retitled, not the third volume which remains to be published. The story traces the life of Marcus Tullius Cicero, narrated by his secretary Tiro, who invented the first recorded shorthand for taking down speeches. The writing is somewhat dry compared to the preceding authors but that did not diminish my enjoyment. Cicero is easily the most fascinating man in the late Republic, provided you approve of his focus on law and politics rather than leading armies and beating up Gauls. As with McCullough’s novels, however, you should probably bring some knowledge of the general situation to appreciate the finer details.

Lastly, I would be remiss not to mention the splendid 1976 BBC TV series I, Claudius, covering the antics of the early emperors (44 BC – 54 AD). Perhaps the BBC’s best production, the series is based on Robert Graves’ eponymous 1934 novel and its sequel. Anyone who hasn’t seen it yet should certainly get the DVD box set. I cannot honestly recommend Graves’ books which I found rather boring, but then again I also hated Moby Dick.

Overviews for HTML5 & Java SE 8


Two new entries in the Developer Books archive. MacDonald’s HTML5 intro is well-suited for people who already know HTML 4, and Horstmann’s Java SE 8 overview should remain useful even after Core Java has been updated. I recommend both books.

HTML5: The Missing Manual — Matthew MacDonald, O’Reilly 2013 (2nd ed.)

This is not a complete HTML5 reference, which would be difficult anyway for a “living standard” comprising many disparate technologies. Instead, MacDonald assumes his readers already know HTML 4 and focuses on new features introduced since then, whether in HTML5/CSS3 or as JavaScript libraries. His coverage usually takes the form of overviews with extensive tutorials and usage tips, plus numerous links to related articles and tools. Such overview books often are too brief and vague to be useful, but refreshingly that’s not the case here. MacDonald’s writing is certainly concise, but also densely packed with relevant and often surprising information.

I did notice two strange omissions that readers should be aware of. First, MacDonald briefly mentions RDFa but not its extremely widespread OpenGraph derivative. Second, MacDonald advises using normal ems instead of CSS3 “root ems” on grounds of IE7/8 compatibility. Jonathan Snook’s trick of combining px with rem would seem preferable. That aside, I can recommend HTML5: The Missing Manual, at least until Musciano & Kennedy deliver their promised HTML5 guide.

Java SE 8 for the Really Impatient — Cay S. Horstmann, Addison-Wesley 2014

As the name implies, this rather slim book gives a compact overview of new features in Java SE 8, as well as some of the more obscure changes in Java SE 7. Horstmann is as thorough and readable here as in the Core Java tomes he co-authors. He describes not just marquee features like lambdas that have been widely covered elsewhere, but also small important additions such as methods for unsigned arithmetic that I had been quite unaware of. The wisdom of compressing JavaFX into 30 pages is debatable, but Horstmann did take the opportunity to mention root em sizing.

What’s more, he finally admits that the Java browser plugin is a security risk whose time has passed. If anything he swings too far in the opposite direction, recommending HTML5 for targeting “a general audience” because “Java is no longer a viable platform for widespread distribution of client applications over the Internet.” True for mobile users but private JRE deployment still works fine for desktop users. Be that as it may, every Java programmer upgrading to SE 8 should benefit from this very useful book.

Hearthstone & Diablo III


I’m not a huge fan of most Blizzard titles, but I did play the hell (sorry) out of Diablo I/II and tried the free Starter Edition (a.k.a. playable demo) of Diablo III when it became available. Unfortunately I found it rather boring, and I was put off by Blizzard’s open contempt for people who wanted to play solo or didn’t like the new auction house. Two years later the company has reversed itself entirely, and as I reinstalled Battle.net to check out the first official Hearthstone release I decided to revisit Diablo III as well. So here are some first impressions from both games. 2014-03-17: Added a section with more extensive impressions from Diablo III after finishing the first act.

Hearthstone

Hearthstone is as nicely polished as you’d expect from Blizzard and free to boot, but ultimately still a Magic: The Gathering clone. This means you’re supposed to memorize a dozen heroes and approximately two billion cards in order to optimize your decks. That’s way too intellectually involved for me! The tutorial campaign was fairly entertaining but I lost all interest when Hearthstone presented me with pages upon pages of cards to build my deck from, just for my one tutorial character. I prefer simpler card/dice battling games without deck construction, like Ascension or Quarriors. However, if you do enjoy M:TG-style card collecting you should certainly give Hearthstone a try – it’s wildly popular with fans of this genre, and its free-to-play monetization scheme is reportedly extremely lenient.

Diablo III

Diablo III has improved greatly in two years of patching, mostly thanks to the ability to select a harder difficulty right from the start. Enforcing the completion of each difficulty level in sequence made the first one or two campaigns laughably easy for experienced players – a bizarre design failure of the entire series that most other action RPGs avoided. Curiously, the free Starter Edition doesn’t let you pick a difficulty level when starting out, but you can later switch from Normal to Hard while playing.

There were also numerous other welcome additions and changes since the game launched, notably the upcoming removal of the real money auction house and the corresponding rebalancing of items so you’re more likely to find good gear for your current character. Battle.net now even logs you in automatically, finally abandoning Blizzard’s habit of asking for the password on every start-up, as annoying as it is futile on a typical single-user machine.

Thanks to the imminent launch of the Reaper of Souls expansion, you can get the base game for just US $20 directly from Blizzard. I’m still contemplating getting the expansion on the PlayStation 4 (if I ever find one of these mythical consoles) since I prefer playing single-character action games with a controller, but $20 on Windows was too good to pass up.

Another DPI Scaling Failure

Disappointingly, even Blizzard still can’t figure out Windows DPI scaling in 2014. The Battle.net launcher is marked as DPI-aware but doesn’t scale at all, so you get tiny text at high DPI. However, Diablo III itself freezes the mouse cursor unless you manually check the Windows compatibility option “Disable display scaling on high DPI settings” on its desktop shortcut! That’s a pretty impressive show of incompetence, especially for a company that has specialized in Windows development for decades. On the bright side, I did not encounter any other technical issues in a few hours of playing.

Diablo III: First Act Complete

I finished the sizable first act of Diablo III over a rainy weekend, so here are some more impressions. First the bad news, namely one other technical issue I encountered. Despite Blizzard’s other changes Diablo III is still an online game, even in single-player campaign mode, and this means you can still get kicked out of your solo game when you are disconnected from Battle.net for any reason.

This happened to me once when I left the game idling for 15 minutes or so, although I don’t know if that was in fact the cause. You don’t lose any character stats or equipment or story progress, but you do have to restart from the last unlocked waypoint, with all maps and monsters reset and newly randomized. This is also what happens when you deliberately quit the game for any reason – all your character progress is saved but nothing else. So the game is still unsuitable for anyone who cannot reliably have gaming sessions of 30+ minutes without lengthy interruptions.

Gameplay: An ARPG Revolution

Now for the good news. My first impressions of the gameplay were positive, and it just kept getting better. Most notable are the long-standing nuisances of the ARPG genre that aren’t there:

  • Scrolls for Town Portal & Identify. Both features always work and are completely free, except for a small delay to prevent you from using them in the midst of battle.
  • Health potions with different sizes & stacking limits. There’s one size which always refills all your health, and I’ve stacked 31 potions so far. This is counterbalanced by a fairly lengthy cooldown timer, but that in turn is offset by generous instant health drops from fallen enemies.
  • Any other potions. Your mana equivalents recharge automatically, aided by skills or equipped items, and the recharge speed regulates how often you can use your most powerful skills.
  • Continually recasting passive skills or auras. Any skill that has no instant effect is passive, and any passive skill is always on while selected. Simple as that, and a huge relief.
  • Manually increasing attributes when leveling up. Your character’s attributes automatically increase in the class-optimal distribution. This removes both busywork and a common trap for novices who don’t know the optimal attribute distribution.
  • Severe death penalties that make you throw your keyboard out of the window. Unless you’re playing a hardcore (= permanent death) character, you just suffer some easily repaired durability loss on your items. Then you can respawn wherever you like, including right over your corpse – and you’re even invulnerable for a few seconds to change position.

The near-total lack of penalties for dying might sound too lenient, but it’s no different from any other game that allows reloading. The traditional heavy gold or experience losses only add mandatory grind to make up for lost ground, and that quickly turns fun into frustration. Moreover, you do lose any temporary beneficial effects when you die, including another neat innovation: Pools of Reflection that increase experience gain not for a certain time but until you reach a certain number of XP. You can play as slowly and cautiously as you like, but you must avoid dying!

At least for the Demon Hunter class I’m playing (which is vicious fun and highly recommended), there are not one but two different mana equivalents, Hatred and Discipline, which enable different skills and recharge in different ways. This is such an obvious and effective way to diversify the “click on monster until dead” gameplay that in retrospect it’s amazing no one else thought of it.

The well-publicized replacement of skill trees with freely swappable skills and runes unlocked by leveling up works extremely well. Forcing the player to make permanent character choices might be RPG dogma but let’s face it, the choice between killing monsters by fire or electricity is not exactly dramatic soul-searching stuff. In this genre skills are simply another kind of weapon or armor, so it makes perfect sense to allow changing them as easily as any other equipment. Here, too, Diablo III has successfully shed an encumbering heritage from “real” RPGs that are really quite different games entirely.

Lastly, hard difficulty is balanced exactly right for a new character. It’s not overwhelming but does force me to use my character’s skills properly. I died only a few times when running into especially nasty elite mobs. Bosses are exceptionally powerful to be sure, but refreshingly not the tedious unkillable hitpoint tanks as in most ARPGs. And the world is well-stocked with varied monsters and side quests, so you aren’t generally running around looking for something to do. (I’m looking at you, Torchlight 2.) We’ll see if the remaining acts keep up this high level of quality, but so far I’m very impressed and willing to declare Diablo III the best ARPG I’ve yet played.

Grand Java SE 8 Update


Following Oracle’s long-awaited release of Java SE 8, I went through my Java-related articles and updated various links, file paths, and test results to the latest version. See Java Links & Tools for the current Oracle documentation and downloads. Most test results haven’t changed much, but neither has Oracle’s annoying default Windows JVM packaging. Here are some updates on this sorry subject:

Java Client VM — Java SE 8 is faster across the board in SciBench 2.0a. The Client VM gained 35% while the Server VM gained 15% (32 bit) to 20% (64 bit). While the Client VM could shrink the gap a bit, it’s still considerably slower under load. Empty startup times rose by an unnoticeable 20 msec, and I could no longer measure any startup difference at all between Client and Server VM.

The JRE once again got bigger, growing from ca. 40 to 50 MB for the 32-bit edition with the JDK’s Server VM. The new compact profiles don’t help, as they currently only check for API usage but do not actually subset the JRE (that’s supposed to be coming in Java SE 9). I did, however, delete a bunch of optional components listed in the JRE 8 Readme which saved over 20 MB in my test package.

Oracle Java on Windows — Sadly, no change there. The default 32-bit “online” installer for Windows still includes crapware, still lacks the Server VM, and still installs the useless and dangerous browser plugin. I revised the page for a better coverage of alternative options, though, including the new .tar.gz archives on the JRE 8 Downloads page. These come without installers which means you won’t get any unwanted software, but also no OS integration whatsoever. You’ll have to manually invoke java[w].exe to run Java applications. Not exactly ideal, but a nice option for power users.

Note: At the time of this writing, java.com still offers Java SE 7u51 and will do so until a future undetermined date. You’ll need to get Java SE 8 directly from JRE 8 Downloads. As for my own website, the big overview article Java for C# Programmers will take a few more days to update as well.

MIME Browser 1.3 Released


Version 1.3 of MIME Browser, my free EML message viewer written in JavaFX, is now available for download. The one big change is the upgrade to Java SE 8 with JavaFX 8 whose new Modena theme is fashionably flat and gray, as you can see in the screenshot below. (I made the blue hyperlinks a bit darker because I found the very light default color hard to read.)

MIME Browser

There were also a few minor bugfixes, two of them courtesy of the updated WebView in JavaFX 8.

  • Zooming now scales the entire HTML view, including images and spacing, not just text.
  • The HTML view’s vertical scrollbar now scales correctly in high DPI mode on Windows.
  • Days of month in the directory view now correctly start at one rather than zero. (Sorry!)

There was one annoying regression concerning developers. I had to remove UML diagrams from the Javadoc class reference because yWorks UML Doclet is incompatible with Java SE 8. Apparently the tool uses some internal com.sun.* classes that have changed. I hope it will get updated soon.


Sony Vaio Duo 13 Review


Microsoft’s latest attempt to fight the race to the bottom in Windows hardware has once again ended in a whimper, mostly because its own Surface and similar upmarket devices showcased the Metro part of Windows 8 that nobody wants. Anecdotally, I saw the elegant ultrabooks and hybrids vanishing again that had populated store shelves around the release of Windows 8. They were replaced with the usual heaps of cheap ugly plastic cases with dim non-touch TN panels and ancient spinny hard disks. Sadly, that’s apparently exactly what most Windows users want.

Sony used to be an exception, one of the few makers of expensive high-quality Windows PCs. But the company was already in bad financial shape and took another hit when Windows 8 failed to revive the PC market. Accordingly, Sony sold off its Vaio PC division to a Japanese investment group and now presumably pins its hopes on the runaway success of the PlayStation 4. That’s a pity because I thought the Sony Vaio Duo was the most interesting of all Windows 8 era hybrids: an elegant high-end ultrabook with a digitizer pen like Microsoft’s own Surface Pro, but with a permanently attached keyboard underneath a clever sliding mechanism.

Since there likely won’t be a successor, I decided to throw a hefty €2,100 at a Sony Vaio Duo 13 with almost all options maxed out. The major exception is the SSD where I stopped at a reasonable 256 GB. An ultrabook with its mobile dual-core CPU and integrated graphics clearly isn’t adequate (or intended) as a desktop replacement, but my search for that kind of portable came up empty as usual. Once they approach the power of my aging homebuilt Core i7 920 box they’re not just twice as expensive – they are also too big and heavy for actual mobile use, have a very short battery life, get much louder at equivalent load, and feature an inferior screen & keyboard to boot.

So right now there’s still no point trying to fully replace a stationary system, if you need one and have the room for it. Originally I planned to pair my desktop system with a light & quiet mobile device, synchronizing work between them. However, after using the Vaio Duo 13 for a week it proved sufficiently powerful for most tasks, so I pretty much abandoned the desktop altogether. The picture below shows iPad Air and Sony Vaio Duo 13 side by side – the Vaio’s screen is almost exactly twice as big, about the minimum size for conveniently arranging and operating Win32 desktop applications.

[Picture: iPad Air & Vaio Duo 13]

Hardware Specifications

For a general overview of the Sony Vaio Duo 13 see the Engadget review. Sony allows customers to assemble Vaio PCs from a broad variety of hardware options. I mostly went near the top of the range, which explains the rather shocking price of €2,100. The hardware specifications are broadly similar to the bigger MacBook Air variant, plus some additional options.

  • CPU: Intel Core i7 4650U at 1.7 GHz, dual-core, hyperthreaded
  • GPU: Intel HD Graphics 5000, integrated (see AnandTech test)
  • RAM: 8 GB DDR3, dual-channel, 2 × 800 MHz
  • Storage: Samsung SSD MZNTE256HMHP, 256 GB, SATA III 6 Gbit/s
  • Audio: Realtek High Definition Audio, Sony ClearAudio+ post-processing, microphone & stereo speakers, headphone output
  • Video: Triluminos IPS LCD screen (13.3″ = 33.8 cm, 1920×1080 pixels), front & rear camera, HDMI output
  • Input: backlit keyboard, capacitive multitouch screen, N-Trig digitizer pen, Synaptics touchpad with gesture support
  • Sensors: device orientation, Huawei GPS, NXP near-field proximity
  • SD card reader, 2 × USB 3.0, Broadcom Bluetooth & Wifi 802.11abgn, Huawei 4G broadband

That’s basically everything you could wish for, although with only two USB ports you’ll want to get a powered USB hub if you expect to operate in “docked” mode with external mouse & keyboard and perhaps disk drives, printer, etc.

What’s this mobile CPU capable of? Games are problematic, as discussed below, but everything else runs with acceptable performance, from NetBeans to LibreOffice to even Adobe Lightroom. Compared to my desktop system there are certainly longer delays when importing and editing pictures, but never long enough to become annoying.

Battery & Noise

Mobile systems inevitably offer a much worse price/performance ratio than stationary ones, but in return I expect that they are truly mobile – light and silent, rarely needing to recharge. And the Vaio Duo 13 delivers. Its predecessor, the Vaio Duo 11, had offered a clip-on pack to double its battery life from 5 to 10 hours. Sony says the Vaio Duo 13 effectively has that extra pack built in, and in my experience that claim is justified. I can’t attest to exactly 10 hours, but I can certainly go for a whole day without recharging while in fanless operation.

What’s covered by fanless operation? Surprisingly, almost everything! Exceptions include games and other CPU-heavy loads which rarely occur in practice. Web browsing, text editing, and file operations (see below on the BitLocker caveat) tend to leave the device completely silent. This is true even in closed tablet mode when some exhaust vents are covered up, and even after I had set tablet mode to “Performance,” i.e. no precautionary speed limits. CPU and motherboard temperatures were around 40° Celsius or less. When the fan does activate it’s usually at a low noise level, and it soon turns off again. In practical operation the Vaio Duo 13 is mostly as silent as an iPad.

Hardware Quality

Sony makes some of the very few Windows notebooks that can compete with MacBooks in terms of hardware quality. The beautiful carbon fiber body looks and feels like a polished slab of granite, but is (just about) light enough to manipulate single-handedly. The fold-out hinge works smoothly and can likewise be operated with one hand. When opened the screen rests at precisely the right angle, and the overall size & weight are just about the maximum that can still be handled conveniently without desk support.

The IPS display is truly magnificent, bright and colorful with good viewing angles, using the same Triluminos color technology as Sony’s Bravia TVs. Since it’s a glossy screen I also bought a preattached anti-glare protection film, which does not seem to impact display quality or touch input. Sony even provides a tasteful Windows desktop background that I prefer to the garish Windows 8.1 defaults.

Audio output is handled by an ordinary Realtek chip without third-party options such as Dolby Headphone. Sony does offer its own ClearAudio+ post-processing, though. I found the music preset not much inferior to my Asus Xonar DGX with the same headphones – powerful deep bass, treble a tad too loud, but a nice impression of space. Speaking of headphones, the Vaio’s output is completely free of noise and powerful enough to drive my big Sennheiser HD 380 Pro cans. The built-in speakers are solid but obviously you can’t expect much there.

The chiclet keyboard is excellent for its technology, with almost standard-sized keys, nice strong click points, and separate cursor and function keys. Something has to give in such a small chassis, though. The numeric keypad is absent, and four navigation functions (Home, End, Page Up/Down) require an additional modifier key. The touchpad is likewise one of the better specimens I’ve seen, with precise pointer tracking despite its tiny size. The entire surface doubles as a button – very convenient, but it took me a while to discover that the bottom area represents right clicks. Gesture support for two-finger scrolling and pinch-zooming is another noteworthy feature.

Touch Input Options

Aside from the standard touchpad, the Vaio Duo comes with a capacitive multitouch screen that works as well as you’d expect. I was pleasantly surprised to discover that all modern Windows desktop browsers fully support swipe-scrolling and pinch-zooming, so you don’t have to use Internet Explorer 11 in Metro mode for a comfortable tablet browsing session. That said, many Win32 desktop applications (including browsers) feature controls that are too small to hit reliably with your finger.

Here’s where the Vaio Duo’s unusual digitizer pen comes in. This is an active input device that’s powered by its own battery and communicates with a separate high-resolution grid overlaid on the regular touch screen. The tip is pixel-precise and even emulates mouse pointer hovering – by literally hovering the pen a few millimeters above the screen surface. Tapping executes a left click, tapping & holding eventually produces a right click. Alternatively, hold down the smaller button before tapping the screen for a right click, as explained here. Note that by default, the pen’s two physical buttons start up Sony Metro apps. You can disable this feature but sadly not map your own button functions.

I found the pen a worthy addition that makes laptop use of Win32 applications much easier than with just touchscreen and touchpad. I greatly prefer simply pointing to the screen over dragging some barely visible pointer around. Of course the pen can also be used for drawing or handwriting but I’m not very artistically inclined, so I can’t comment on these functions. (See e.g. this review and comments.)

Pitfall: BitLocker vs. SSD

When I cleaned up temporary files after initial setup I was surprised that the operation seemed to take forever – and that the CPU fan was howling loudly. How can SSD access put such a heavy load on the CPU? Turns out that since I had bought the Windows 8.1 Pro upgrade, Sony had “thoughtfully” encrypted the entire Windows partition with Pro’s BitLocker!

This is not necessarily stupid. Like most modern notebooks, the Vaio Duo comes with a Trusted Platform Module that renders a BitLocker-encrypted partition inaccessible when the disk is removed from its paired motherboard. Assuming the computer itself is password-protected, this should offer fairly decent protection of sensitive data against theft.

However, I don’t need this protection as my sensitive data is locked away in a TrueCrypt container – and BitLocker has a significant CPU impact (more samples). Once I had decrypted the disk and disabled BitLocker, the SSD ran as fast and cool as you’d expect.

By the way, disabling BitLocker first required me to finish enabling it: the setup was still waiting to generate a recovery password, and would not allow decryption until encryption was finalized. Moreover, the fact that BitLocker was enabled and waiting for that step was only indicated by an obscure drive icon. Neither Windows nor Sony communicated this rather important fact in any other way.

Software Setup

Which brings us to the included software. I had checked Sony’s “Clean Start” option to avoid the usual Adobe and McAfee advertising-ware. This option is free, except that you’re forced to get Windows 8.1 Pro rather than the basic variant. This cost €50 more and led to the aforementioned BitLocker mystery, but it did get rid of crapware… or did it?

First, I still found several directories related to McAfee Security Scanner that required manual removal. One even had a proper uninstaller (Program Files\Sony\MSS\uninstall.exe) yet did not appear in the Windows uninstall list. Obviously Sony first dumps this crap on all systems and then removes (some of) it again when “Clean Start” is ordered – rather the wrong way around I’d think, but possibly mandated by the McAfee contract.

Second, Sony installs some other big bundles of dubious software. One is Broadcom’s gigabyte-sized Widcomm Bluetooth application package, providing all sorts of enhanced multimedia functionality. This is on top of the basic Bluetooth connectivity which is always available. As one Sony Vaio user discovered, this package hooks into Apple iTunes – and routinely prevents it from shutting down by hogging the iTunes scripting interface (nobody knows what for). Since I don’t use Bluetooth I was happy to just uninstall this bizarre package.

Then we have WebToGo’s OneClickInternet which seems intended for people who can’t figure out network setup, and of course the inevitable Microsoft Office 365 trial edition. Finally, Sony’s own VAIO Care package is rather impressively bloated. There’s an included updater that always runs – and always produces “503 Service Unavailable” errors when attempting to contact Sony’s update server. Fortunately it can be uninstalled separately. I did not dare to uninstall all Vaio software since it handles emergency system restore and some hardware configuration that (I think) is not available elsewhere.

Software Updates

Updates are a notoriously painful aspect of using Windows. Even though my computer shipped with Windows 8.1, released not half a year ago, it immediately wanted to download 900 MB of updates – just for the operating system! At least I’m on flat-fee WiFi broadband rather than a metered mobile phone connection. Microsoft needs to break this patching addiction if it wants to succeed in a world of mobile consumer devices.

On the other hand, if you want all updates you’ll have to find them on your own. Some hardware components had new driver versions but Microsoft Update did not offer them, and as mentioned above Sony could not contact its update server. Even better, the Sony support website did not recognize the model number of my newly bought Vaio Duo! Bravely I selected some random other number from the same device category, indicating a slightly different configuration. The resulting driver downloads worked fine on my system.

Microsoft Windows 8.1

Intel hardware has advanced greatly in terms of power efficiency, and so has Microsoft Windows in terms of frugal hardware use. Even a cold boot on the Vaio Duo takes mere seconds – admittedly by expending disk space on kernel hibernation, but that tradeoff is acceptable for a mobile device. Sleep works reliably and wakes up instantly, just like an iPad.

What about the notorious Start screen? While it’s certainly much more usable with touch input, it’s still not very useful on a system that only runs Win32 desktop applications. After a few days with the Start screen I went back to Classic Shell once more. It’s just plain superior for organizing a complex desktop setup, especially on a system with a precise digitizer stylus.

Windows 8.1 also brought per-monitor DPI scaling, and since I’m often using an external monitor with the Vaio Duo we’ll take a look at that next.

Monitors & DPI Scaling

The Vaio’s built-in 13.3″ screen has a resolution of 1920×1080 pixels, which works out to about 166 DPI. Not exactly “retina” level but small enough that I had to keep the factory default of 150% DPI scaling, even at a laptop’s short viewing distance. This reduces the effective layout space available to applications quite a bit, but there’s still enough room to get real work done.
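
(For the record: the screen diagonal spans √(1920² + 1080²) ≈ 2203 pixels, and 2203 pixels / 13.3″ ≈ 166 DPI. The same arithmetic yields the 82 DPI mentioned below: 2203 pixels stretched over the Dell’s 27″ give 2203 / 27 ≈ 82 DPI.)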

Part of my existing desktop system, and now of my docking setup for the Vaio, is a Dell U2711 monitor. This 27″ monster offers 109 DPI at its native resolution of 2560×1440 pixels, but I’m sitting far enough away that I had been using 150% DPI scaling at that resolution as well. Connecting to the Vaio over HDMI worked fine, but the extra pixels turned out to be wasted.

First, the Vaio’s Intel GPU is limited to 1920×1080 pixels, even over HDMI. Some Googling found rumors that older Intel driver versions might have allowed custom HDMI resolutions up to 2560×1440, or that newer monitors would in fact work at that resolution, but I couldn’t go beyond 1080p. This means the display is slightly blurry due to the monitor’s built-in picture scaling, and things are almost comically big at 150% due to an effective 82 DPI when 1080p is stretched over 27 inches.

How about reducing the Windows DPI setting? Windows 8.1 introduced per-monitor DPI scaling which is enabled by default. Now that I had two screens with very different pixel densities I gave this feature a try. It works… as far as Windows is concerned. But all the programs I had nicely centered on the Vaio’s built-in screen were now huddling in the Dell’s upper-left corner. I’d be constantly moving & resizing windows if I took advantage of per-monitor DPI scaling. Besides, Windows seemed to set the Dell to 100% which is rather small at a desktop viewing distance… and of course everything was even blurrier since the monitor still ran at 1080p.

So I disabled this option and went back to gigantic projector-style sizes on the external monitor. At least all window layouts are exactly identical between screens, allowing for seamless switching. I guess I’ll simply buy a smaller 1080p monitor when the Dell dies, assuming I’m not on another computer by then. (Ironically, when I’m playing games on my desktop it’s mostly at 1080p as well, for performance reasons. So the Dell’s impressive 2560×1440 resolution is now perfectly useless.)

Playing Games (Or Not)

My configuration of the Sony Vaio Duo 13 is almost decent enough to play modern video games but honestly, you’ll still want to keep a desktop around for that job. Of the three big-budget games I tried, the two that run fine are Pinball FX2 and Rise of Nations. The first even runs on iOS, the second is ten years old… and both require external controllers to play well. Rise of Nations really needs a mouse, being an RTS title. More on Pinball FX2 below.

The third game I tried was Civilization V which does in fact have dedicated touchscreen support on Windows 8. Touch input works and the game does run – but not terribly well. Graphics get choppy even at a middling-low quality level, and the battery drains completely in two hours of play. Firaxis’ notoriously poor quality control eventually asserted itself, too, with units suddenly showing movement paths I had never entered. I abandoned my attempts to play Civ5 at that point, and didn’t try to run Diablo III which definitely requires a mouse anyway. Like all Windows ultrabooks, the Vaio Duo is not a device you’ll want to buy for playing games.

Pinball FX2 is a perfect fit for portrait mode, but that renders the keyboard unusable so you’ll need a gamepad. (There are no touch controls.) Moreover, you’ll want a stand to keep the Vaio upright. In the picture below I’ve hijacked my Logitech keyboard’s iPad stand for that purpose. Finally, you need to rotate the screen to vertical orientation before you launch Pinball FX2, or the game will get terribly confused. But with all these caveats, the Vaio Duo 13 is one mean digital pinball machine!

[Picture: Vaio Duo 13 in pinball mode – The Ball Is Out There]

Conclusions

Overall I’m very fond of this elegant little machine. Any recommendation needs the caveat that Sony has sold off its computer division, so there may be no more device-specific software or firmware updates. Nevertheless, assuming the Vaio Duo won’t die on me tomorrow I’m quite happy with my purchase. Summing up this lengthy review, here are the points that stood out to me.

  • Fantastic hardware in every respect, beautiful and robust as well as functional and thoughtfully designed. Competitive with Apple devices, and if there’s a better-made Windows computer I’ve yet to see it. All the worse for both customers and Microsoft that Sony is leaving this business.
  • Intel’s hardware and Microsoft’s software are finally efficient enough to provide full desktop capability in a genuinely mobile device, almost light enough to hold in one hand and lasting up to a day without recharging.
  • Even a supposedly “Clean Start” system still comes misconfigured and with defective bloatware, and Microsoft still abuses customer devices as patch landfills. If this doesn’t change the best hardware in the world won’t stop the erosion of Windows consumer marketshare.
  • Touch input is surprisingly useful for desktop applications. Browsers directly support tablet gestures, and the digitizer pen is a great mouse substitute. It’s ironic that touch devices made a breakthrough thanks to Steve Jobs banning pens, yet Windows on touchscreens gets a massive usability boost from high-precision pens.
  • Tablet mode is useless, unless you’re a pinball fan. The Vaio Duo 13 is still too heavy to hold in one hand for extended periods of time. Raising the screen into laptop position is a much more convenient way to achieve a good reading angle, and still puts the touch surface in easy reach. Moreover, the whole point of a Windows machine is to run keyboard-heavy software.
  • As a corollary, Metro mode is utterly useless as well. I constantly use the touchscreen, yet I completely avoid Metro since it offers nothing I want. Touch input is indeed beneficial – but in desktop mode, amazingly enough. Microsoft should have focused on improving touch support and DPI scaling for Win32 applications, rather than tacking on a useless smartphone mode.

edit: As if to demonstrate how software is holding back Windows, the Vaio’s fan started howling soon after I had posted this article. Temperatures rose to over 60°C and stayed there. Was the device already defective? No, it’s just Microsoft’s Windows Image Acquisition service that starts up automatically when a camera is plugged in, as I had done to copy the two pictures above. The service then apparently went into a frenzy and kept consuming 30-40% of CPU time, even though it had nothing to do. Manually stopping it immediately dropped CPU load and temperature, and also silenced the fan. How is an ordinary user supposed to cope with that?

Java 8 for C# Programmers


My overview article Java for C# Programmers has been updated for Java SE 8. You can find many links to the new features in the announcement and follow-up post at Oracle’s Java Tutorials Blog. I also once again recommend Cay S. Horstmann’s book, Java SE 8 for the Really Impatient. That said, here’s a quick rundown of the new features I incorporated into my article, followed by a small combined sketch:

  • Unsigned arithmetic is finally part of the standard library, although there are still no corresponding primitive types.
  • Static & default methods on interfaces remove a big weakness of the single-inheritance paradigm. Unlike C# extension methods, they package default implementations with the interface itself. Moreover, default methods allow extending an interface without breaking existing clients.
  • Lambda expressions are syntactic sugar for function objects, i.e. anonymous inner classes that implement single-method interfaces. That sugar is pretty sweet, though: expressions and parameters are implicitly typed, and you can use method references if you’re just calling one existing method anyway. You get the C# delegate functionality for free, too, since every lambda expression or method reference can be stored in a variable typed to a matching functional interface.
  • Streams & pipelines use lambda expressions to manipulate collection elements, including files or generated data. More importantly, they allow on-demand fetching of new elements and optional automatic parallelization of each pipeline stage.
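
To see several of these pieces working together, here’s the promised sketch – minimal, self-contained, and entirely my own invention (Greeter and all other names are made up for illustration):

    import java.util.Arrays;
    import java.util.List;
    import java.util.stream.Collectors;

    // Single-method interface that lambdas can implement, carrying
    // a default and a static method along with it.
    interface Greeter {
        String greet(String name);

        // Default implementation ships with the interface itself,
        // unlike a C# extension method defined in a separate class.
        default String greetLoudly(String name) {
            return greet(name).toUpperCase() + "!";
        }

        static Greeter polite() {
            return name -> "Good day, " + name;
        }
    }

    public class Java8Demo {
        public static void main(String[] args) {
            // Unsigned arithmetic via static library methods
            System.out.println(Integer.toUnsignedString(-1)); // 4294967295

            // Lambda stored in a variable typed to a functional interface,
            // much like a C# delegate
            Greeter casual = name -> "Hi, " + name;
            System.out.println(casual.greetLoudly("Ada"));     // HI, ADA!
            System.out.println(Greeter.polite().greet("Ada")); // Good day, Ada

            // Streams: on-demand pipeline with a method reference;
            // swap stream() for parallelStream() to parallelize stages
            List<String> upper = Arrays.asList("Ada", "Grace", "Alan").stream()
                .filter(n -> n.length() > 3)
                .map(String::toUpperCase)
                .collect(Collectors.toList());
            System.out.println(upper); // [GRACE, ALAN]
        }
    }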

There were a couple of other noteworthy updates beyond the scope of my language comparison. WebView got new HTML5 features, and JavaScript outside of WebView now runs on Nashorn. Java also got better concurrency support, a sane Date-Time API, and built-in Base64 encoding.
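
As a quick taste of those additions, here’s another minimal sketch of my own (names invented; assumes a Java SE 8 runtime with the bundled Nashorn engine):

    import java.time.LocalDateTime;
    import java.time.format.DateTimeFormatter;
    import java.util.Base64;
    import javax.script.ScriptEngine;
    import javax.script.ScriptEngineManager;

    public class MiscJava8Demo {
        public static void main(String[] args) throws Exception {
            // java.time: immutable date-time values with sane formatting
            LocalDateTime now = LocalDateTime.now();
            System.out.println(now.format(DateTimeFormatter.ISO_LOCAL_DATE_TIME));

            // Built-in Base64 codec, no more sun.misc or third-party helpers
            String encoded = Base64.getEncoder().encodeToString("Kynosarges".getBytes());
            System.out.println(encoded + " -> " + new String(Base64.getDecoder().decode(encoded)));

            // Nashorn replaces Rhino as the bundled JavaScript engine
            ScriptEngine nashorn = new ScriptEngineManager().getEngineByName("nashorn");
            System.out.println(nashorn.eval("'Nashorn says ' + (6 * 7)")); // Nashorn says 42
        }
    }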

Mastering the Mocca


For the past 2½ years I’ve been brewing coffee with the Technivorm Moccamaster KBT 741, generally accounted the best drip coffee maker in existence. It’s not cheap – I paid €200 for the predecessor of the linked model – but it very thoroughly extracts the coffee grind’s tasty chemicals, thanks to a precise water heater and cleverly arranged multiple drip outlets. Besides, how can you not like a machine with such an awesome name? (The “mocca” here refers to European mokka which simply means strong black coffee, not American mocha which is a variant of hot chocolate.)

I also used the electric burr grinder Krups GVX2 which unfortunately looks much more solid than it is. The cheap plastic body generates enough static electricity to make half the grind stick to its complex surfaces, turning cleaning into a lengthy and messy procedure. Poorly fitted safety pins come loose during operation, so grinding often grinds to a halt until the cover is slammed back down on the machine.

So I went to find a better mill, and I did find the excellent Hario Skerton. While I was at it I also checked out the famous AeroPress coffee maker to see how it would compare to the Moccamaster.

[Picture: Hario Skerton & AeroPress]

Hario Skerton (MSCS-2TB)

Turns out the best coffee mills are Japanese. The Hario Skerton, listed as MSCS-2TB on the product page, is an adjustable manual ceramic burr grinder whose looks-to-quality ratio is about the inverse of the miserable Krups GVX2. The unassuming little mill is incredibly solid and made of excellent materials. There’s virtually no static electricity and as few parts and surfaces as possible, resulting in very easy cleaning. The grind receptacle comes with a separate airtight rubber cap, allowing it to double as a storage container for several days of coffee (depending on consumption).

One caveat: While the grind level selector is as robust as the rest of the mill, it features no markings and is not easily adjusted. Once you’ve found the proper fineness the grinder will produce that level reliably, but it’s not practical to change levels on a regular basis. Moreover, the manual explicitly warns against grinding anything but coffee – worth mentioning since some retailers advertise the Hario Skerton as a general-purpose kitchen grinder.

That said, the Hario Skerton is a great inexpensive device to grind your daily coffee. After some adjustment I could easily match the fairly fine grind of my old Krups. Manual operation does take more time but it’s good for your arm muscles! Speaking of price, I paid €38 but also saw it under €30. The Kyocera CM-50 CF appears to be the same model but costs €60 in Germany; I don’t know which company actually manufactures the grinder.

Aerobie AeroPress

The Aerobie AeroPress caused a huge stir among coffee enthusiasts: a cheap, fast, simple way to produce deliciously smooth coffee. Zachary Crockett recounts the history of the AeroPress and its remarkable inventor, Stanford professor Alan Adler, who also designed flying discs and much else besides.

The picture above shows the two tubes that constitute the AeroPress itself. The package also includes filters, a filter holder, an adapter cone for grinding, a spoon and a stirrer. You put grind into the lower tube where it sits on a screwed-on filter, then pour boiling water on top and stir for ten seconds. Now you insert the upper tube which is fitted with an airtight rubber seal, and press slowly to force the water through grounds and filter into the cup below. In less than a minute you have highly concentrated coffee without a trace of bitterness, to be diluted with more hot water or milk as you like.

The amazing thing is, it really works as advertised. The manual does make a few silly claims – I have no intention of washing and reusing the small thin filter papers, and the promised “self-cleaning” is limited to the inside of the water tube while the parts that collect the grounds are just as dirty as you’d expect. Still, cleaning takes little extra time compared to a drip machine.

AeroPress Results

The quality of the produced coffee is indeed remarkable. I don’t think I’ve ever tasted one so totally devoid of bitter and sour notes, thanks to the extremely short steeping time. The same fine grind I use with the Moccamaster works quite well, unlike French presses which require a coarser grind for their coarser filter. As for replacement paper filters, Melitta makes round pieces that are bigger than Aerobie’s but can still be screwed in with a bit more effort, and seem to work just as well.

AeroPress coffee does seem weaker than drip coffee, likely also due to the reduced steeping time. That would explain why the included coffee spoon is veritably gigantic, about twice the size I normally use – the size of “one cup” evidently needs to be adjusted upward to compensate for the reduced extraction. But that ultimately depends on your taste (or level of caffeine addiction), and AeroPress coffee is certainly not too weak to drink. You can increase extraction by putting more hot water in the tube itself, rather than adding it to the cup after brewing. This moves strength and taste a bit towards drip coffee but still results in a very smooth brew.

The manual goes into details on perfect water temperatures reminiscent of Xiao Qiao’s tea ceremony in John Woo’s Red Cliff. I freely admit that I couldn’t be bothered and just took the boiling water off the kettle as usual. The taste of the coffee did not seem to suffer. For €29 or less, the AeroPress is one amazing way to quickly produce reasonable amounts of excellent coffee.

The Last Version of Windows 8


Windows 8.1 Update 1 (with a capital U) has arrived to close off the catastrophic Windows 8 era at Microsoft. If the overhyped Windows 8.1 did little but bring the new Metro environment finally up to release quality, U1 is at least more honest about its fairly minor changes – and comes with the promise of fully backtracking on the original Sinofsky concept in the next major Windows release.

The Update

First, some notes on Update 1 itself. Windows had preselected a batch of critical updates for installation before U1, so I installed those first. You should also re-check for available updates immediately after installing U1 – in my case there were both critical and optional compatibility patches for U1 already available. And as always after major updates, you’ll want to run Windows Disk Cleanup with the System Files option to get rid of many megabytes (possibly gigabytes) of update debris.

What’s actually in Update 1? Michael Hildebrand and his colleagues at Microsoft have put together a detailed catalogue of all user-visible changes. Additionally, there are “some heavy-lifting internal changes to Windows boot structures and memory/resource awareness and management,” enhancements to IE11 and OneDrive (née SkyDrive), plus a roll-up of all previously released Windows 8.1 updates. I don’t know what exactly these internal changes are but Windows 8 did advance the operating system architecture beneath the ill-conceived Metro experiment, so it’s good to see this process continuing.

The most important visible changes are that non-tablet PCs now boot directly to the desktop, return to the desktop when all apps are closed, and open media files with desktop applications. Microsoft changed these defaults when telemetry showed that Windows 8 was mostly used in desktop mode. (That’s also a good argument for letting MS collect usage metrics, by the way.) And… that’s it, for desktop users. Literally every other change only affects Metro/Modern apps. These now have right-click menus and title bars on mouse systems, they can show the taskbar and also appear there. Metro’s PC Settings have some more options, and the Start screen’s app list has been slightly tweaked.

The eagerly anticipated return of the Start menu and the ability to run Modern apps in desktop windows are not part of this update, but of the next major Windows release. So if you’re using Modern apps with mouse & keyboard, Windows 8.1 Update 1 should make you happy. Desktop users should continue to stick with third-party solutions like Classic Shell and ModernMix while waiting for Windows 9.

The Future

With Update 1 and the upcoming Windows 9, Microsoft has mostly abandoned the original Windows 8 concept where the desktop and its Win32 applications were a legacy environment, included only for backward compatibility. By now this is no surprise. As Paul Thurrott wrote:

While some Windows backers took a wait-and-see approach and openly criticized me for being honest about this, I had found out from internal sources immediately that the product [Windows 8] was doomed from the get-go, feared and ignored by customers, partners and other groups in Microsoft alike.

Microsoft does continue to support and expand the new WinRT API powering Metro apps. As announced at Build 2014, WinRT is finally coming to Windows Phone and Xbox One. Technically, this means you could write WinRT apps that run on everything from phones to desktop windows. But in practice, with an enormous base of popular Win32 products and libraries, desktop users and developers are highly likely to just stick with Win32 (or WF/WPF or even Java) rather than trying to make another proprietary, relatively limited, touch-centric API work on the desktop.

Putting Metro apps in desktop windows is a neat trick that might well gain some traction as a successor to Windows Vista’s desktop gadgets, or as an alternative for porting mobile games to Windows. (Think of Microsoft’s ostentatious friendship with Xamarin in that light – Microsoft needs to convince Xamarin’s many customers to add Windows Store to their list of supported platforms.)

However, this does not change the fact that the market share of Windows Store is tiny. Windows Phone is not huge and won’t support WinRT for some time. Windows RT tablets are a joke, and Windows 8 users are numerous but avoid Metro apps like the plague – see for example Mozilla’s recent decision to abandon Metro Firefox due to total lack of interest among testers. Tellingly, Microsoft itself has already released a well-received Office for iPad while a touch-native Office for WinRT is still in the works – the existing Windows RT version requires its own mini-desktop environment. Even Windows Phone running Android is a perfectly valid option for a “devices & services” company.

In the best case, WinRT apps might become popular on phones and possibly sell some gadgets and cross-platform games on desktops, but I don’t see any system-selling software switching away from Win32. Very few products outside of enterprises switched from Win32 even to equivalent .NET APIs – why would anyone feel compelled to make the much more difficult jump to WinRT?

Win32 Invictus

Microsoft’s backtracking on the original Windows 8 concept is an implicit acknowledgement that Apple was right to keep Mac and iOS strictly separate, one chiefly aimed at content producers and the other at consumers. But once that separation is (re-) established, it’s also implicit that both APIs will go on to live their separate lives practically indefinitely. Since there is now no perspective that Win32 will ever go away, there is no incentive to support anything else for content creation on Windows.

That’s too bad, in a way. There are good reasons why Win32 should be replaced, and Windows 8 did not invalidate them. Thanks to the absence of a curated app store, users are exposed to crapware-infested downloads from fraudulent websites. The average Win32 application’s extremely poor DPI scaling is only becoming more obvious as screen resolutions increase – I need not bother linking to examples, practically every new Windows laptop review features this complaint. Managing a Windows system with its many and constantly increasing sediments of compatibility cruft is challenging even for IT professionals. And as powerful computer hardware begins fitting into tablet form factors, it certainly would be nice if applications worked just as well with touch screens as with mouse & keyboard.

Sadly, WinRT in Windows 8 was not the answer. The Metro environment was too limited and too poorly handled by Microsoft to offer an acceptable desktop replacement, either from a user or a developer perspective. Now that it is shrinking back to a conventional consumer-only, touch-only API the question of moving Win32 forward is once again open.

Compact Horrors of JavaScript


JavaScript is notorious for the nasty surprises it springs on the unwary programmer, especially since it looks like many perfectly sensible languages (and is deceptively named after one). Two compact books present its mind-melting horrors in concentrated form, so as to quickly bring the unfortunate JavaScript neophyte up to speed. Douglas Crockford’s 2008 classic long served as the standard introduction, but in my opinion has been superseded by David Herman’s all-around superior title. As always, both reviews have been added to the Developer Books archive.

Effective JavaScript — David Herman, Addison-Wesley 2012

Herman’s slim but excellent book easily holds its own next to Effective… classics by Joshua Bloch (Java) or Scott Meyers (C++). He expects some JavaScript experience but covers that language’s many perversions so lucidly and thoroughly that I would also recommend his book as an introduction for programmers versed in other “curly braces languages.” All the seemingly absurd patterns in professional JavaScript code, like anonymous functions that are immediately executed, will finally make sense. Read this book if you have to deal with JavaScript in any capacity whatsoever.

JavaScript: The Good Parts — Douglas Crockford, O’Reilly 2008

Until David Herman’s Effective JavaScript, Crockford’s 150-page overview was the standard primer on the language’s unusual capabilities and shocking defects. Unfortunately, Crockford wastes much of his limited space on pointless grammar diagrams, excessive code samples, and overly specific API references. While he does eventually get around to explaining JavaScript’s distinctive features (for good or bad), Herman does a far better job there and has essentially obsoleted Crockford’s classic.
