Wednesday, November 30, 2011

There's Another Wave on the Horizon

Sometimes new technology catches you by surprise, but often you can see it coming if you know where to look (or just get lucky). I first learned about the devices referred to as three-dimensional printers during my time at Clemson in the mid-90s. Basically, they work by building objects up out of 2D slices. Think of how medical imaging such as MRI works, then use those scans as blueprints. Indeed, that is exactly what they were researching when I was given the tour: using medical data as input to create custom prostheses. Cost and limits in resolution and usable materials have restricted applications so far. But the idea of a device that can build anything (of a certain size or below) sounds like another one of those Star Trek dreams that is destined for reality. Now I read that 3D printers cost less than laser printers did in the mid-80s, and companies are starting to use the devices in a more consumer-facing way, including one that will let you order custom-built ceramic-esque robots. (Incidentally, to dovetail with my own recent posts, the software used to customize the robots also serves as an example of using cutting-edge web programming tools.)

Imagine a world thirty years from now when anything smaller than, say, a soccer ball can be designed and produced in your home. Imagine a world where designs are shared on web pages and/or in design stores that mirror today's mobile application stores. Of course, the real trick will be imagining what the manufacturing sector looks like after that comes to pass...

Monday, November 28, 2011

A Delve into Some Details

My last programming post was a bit of an upbeat ramble in response to a question about the previous post, which was a bit of a grumpy gripe. I would like to revisit that gripe one last time. JavaScript DOM event handling started me down the road to Crankyville, so allow me to reflect on some of the minute straws my camel was carrying that day.

When learning languages and programming interfaces, I naturally tend to use Google as my easiest reference when poking around with new things. Unfortunately Google is like high school: it confuses popularity with authoritativeness. The number one link when searching on the topic of JavaScript event handlers shows an example that looks essentially like this:
thing.eventHandler = function (eventData) {
  var local1, local2;
  if (!eventData){
    var eventData = window.event;
  }
  //The magic happens.
}
I want to be clear up front: this works and turns out to be perfectly correct, but it still bothered me. Being a professional programmer cultivates a certain amount of anal retentiveness in you. Computer code does exactly what you tell it to, not what you meant it to. Plus, not all languages are strict about checking whether you are making sense or telling them gibberish before going off and potentially wrecking things. In this case, a couple of things set my programmer sense off, and they are both related to that if statement above.

First, the reason you have to have the if statement in the first place: standards-following browsers (both Gecko and WebKit) pass event data as a parameter to the event handler function, but older versions of Internet Explorer instead store event data in the window object's event property. The if statement checks to see if there is data in the parameter, and if not attempts to retrieve it from the window property. I have not bothered to look up whether this comes about because of competing standards or too-permissive specifications, but either way the situation has to be handled. And it has to be handled every single time. Compatibility issues of this sort are certainly nothing new, but they do nothing but add code. Picking one style over the other does not break the old code because both cases are covered, so why not declare one the winner and move forward without needing the additional testing? (Note to browser vendors: pick the parameter form. I suspect using the global window.event will interfere with any potential attempts at parallel processing of events. Not to mention that using globals is just bad form.)
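If the check cannot be eliminated, it can at least be written exactly once. Here is a minimal sketch of factoring the fallback into a reusable helper; the function name is mine, and plain objects stand in for real browser state below:

```javascript
// Standards browsers hand the event object to the handler directly;
// old Internet Explorer instead exposes it only as window.event.
// This helper hides the difference in one place.
function normalizeEvent(evt, globalObj) {
  // Use the parameter when present, otherwise fall back to the global.
  return evt || (globalObj && globalObj.event);
}

// Simulated calls: a standards browser passes the event in...
var fromParam = normalizeEvent({type: "click"}, {});
// ...while old IE leaves the parameter undefined and sets window.event.
var fromGlobal = normalizeEvent(undefined, {event: {type: "click"}});
```

A handler can then open with a single line, eventData = normalizeEvent(eventData, window), instead of repeating the whole if block.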

The second issue is much more subtle, and much more just me being a jerk about these sorts of things. The function has a formal parameter called eventData. The function declares a variable eventData. Thus eventData appears to have two different definitions in the same place. Programmers refer to the visibility of things as the "scope" of those things. JavaScript has two types of scope: visible to everything everywhere (global scope) and visible only within the function where it is declared (function scope). When I see two different definitions of something in the same scope, I get an ache behind the eyes. Thankfully, it turned out I was not alone in that assessment. I mentioned Douglas Crockford's JSLint tool previously in passing, and indeed it flags eventData in the code above as being re-declared. The question then became: why is this flagged as bad style but not treated as an actual error? JavaScript, I learned, allows the redefinition of things within the same scope; the second definition is essentially ignored and the system uses the thing already given that name. In this code, it is not a problem. In long, complicated functions written by people used to different rules from different languages, I can see this causing subtle defects. And so could Mr. Crockford. Having someone back up your gut feelings is always nice.
thing.eventHandler = function (eventData) {
  var local1, local2;
  if (!eventData){
    eventData = window.event;
  }
  //The magic happens.
}
In the end we get the above code, with one problem solved, and one problem not. The change is quite subtle, but it makes those aches in my head go away. As for the piece not solved, well, what can I say? Learning to cope with things you can't change is just as much a part of being a programmer as in any other aspect of life.
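The permissiveness itself is easy to demonstrate in isolation. This contrived sketch (all names are mine) shows that a second var declaration in the same function scope is silently ignored rather than creating a new variable:

```javascript
function echo(value) {
  // Legal, but flagged by JSLint: this re-declares the parameter.
  // The var is ignored and 'value' still refers to the same variable.
  var value;
  if (!value) {
    value = "default";
  }
  return value;
}

var given = echo("hello");       // the parameter survives the re-declaration
var defaulted = echo(undefined); // a missing argument picks up the default
```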

Saturday, November 26, 2011

Music I'm Stuck On

I post music that catches my attention from time to time, but it occurs to me that I have never posted the bands that have grabbed the lion's share of my listening time for the past several years. My problem now is narrowing down the song list to just a couple, because I sincerely love large swaths of both bands' catalogs. These choices are not necessarily representative, but simply ones I listen to over and over.

The Finnish band Nightwish combined power metal with classical influences and operatic female vocals to become one of the primary archetypes of symphonic metal.



The Dutch band Within Temptation came from the gothic metal side of the coin around the same time and again features a stellar female lead.



Monday, November 21, 2011

Sometimes a Little Wrong is Totally All Right

Responding to my previous grousing over the irritations of basic web programming, reader Lee asked, "Do you think these tools/standards developed in relative isolation from each other? Or at least lacked active cross-talk among those developing them?" Interestingly enough, exactly the opposite is the case. HTML, CSS, and JavaScript evolved together, with massive feedback from people actually using them. So how did we get into a situation where people in the industry consider JavaScript broken and CSS a huge mess? I think I have a theory, which comes in two parts.

Theory Part the First: Redefining Explosive Growth
One could make a case that the Internet as a Thing with a capital 'T' has redefined our lives in fundamental ways. People call it "The Information Age," but I don't know that we really stop to think about just how young Web technology is. Tim Berners-Lee created the first HTML specification and browser in 1990. The JavaScript programming language, created by Brendan Eich in less than two weeks, first appeared in Netscape's browser in late 1995. Of the three core Web content technologies, only CSS emerged out of a committee, being partially chosen from and partially created out of competing style sheet proposals by the World Wide Web Consortium in the late 90s. In less than twenty-five years we have gone from particle physicists cross-referencing papers to "social media."

I'm no history expert, but I suspect that rate of technology adoption is completely unprecedented. Throughout the brief history of the Internet, people have wanted to use it for more than it was intended. Designers used to want their pages to look like magazines and newspapers; now the requirements go beyond that. We want our phone- and tablet-enabled web pages to behave like interactive movies, respond to our desires and even our voices, and do things that used to be solely in the realm of highly-optimized, stand-alone applications.

Of course, the extreme pace comes with a consequence: the standards have a hard time evolving fast enough to keep up. Even today the big three browser engines, Microsoft's Internet Explorer, the open-source WebKit engine (used in Google's Chrome and Apple's Safari, among others), and Mozilla's Gecko, each support different features, often in slightly different ways. Many of these features are already in use, like the canvas demos I have been doing here. Those canvas demos do not work in IE because the canvas HTML element and its JavaScript programming interface are emerging additions to the standard. Under such conditions, it should come as no surprise that some of the t's are not quite crossed and a few i's have not been dotted.
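For what it's worth, a page can at least detect the gap instead of breaking outright. Here is a sketch of canvas feature detection; the function name is mine, and the simulated documents below stand in for real browsers:

```javascript
// In a browser without canvas support, createElement("canvas") just
// returns a generic element with no getContext method, so checking
// for that method makes a usable feature test.
function supportsCanvas(doc) {
  var el = doc.createElement("canvas");
  return !!(el.getContext && el.getContext("2d"));
}

// Simulated documents in place of real browsers:
var modernDoc = {
  createElement: function () {
    return { getContext: function (kind) { return kind === "2d" ? {} : null; } };
  }
};
var legacyDoc = {
  createElement: function () { return {}; } // no getContext at all
};

var modern = supportsCanvas(modernDoc); // true
var legacy = supportsCanvas(legacyDoc); // false
```

The fallback text you see in place of my demos when a browser lacks canvas is the declarative version of the same idea.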

Theory Part the Second: Which do you Prefer, Better or Working?
In addition to the technological problems associated with the explosive growth of the Web, there is a second consequence. There are, at any given time, millions if not billions of pages on the Internet using now outdated technology and tons of users using outdated browser software. This brings in the old debate over preserving backward compatibility versus breaking the old stuff to improve things going forward. Way back last century, Joel Spolsky wrote about this problem as it pertained to everyone's favorite example of a giant computer company, Microsoft. The sheer size of the Internet gives it tremendous technical inertia, making it very difficult indeed to make wholesale changes. For example, XHTML tried to improve the HTML standard, but ultimately became a useful failure that was absorbed back as an option in the broader HTML spec.

The expansion of the role of the Web is a testament to the soundness of the underlying designs, but nothing can be improved forever. Inevitably old technology accretes changes until it reaches a point of no return where it becomes more costly to modify or repair the existing system than to replace it. The trick is figuring out exactly when you have hit that point. Google is hedging its bets with its Dart programming language while continuing to support improvements to JavaScript. Microsoft is betting that HTML5/CSS3/JavaScript are the way forward by using them as the UI tools for Windows 8 (in the process invoking the same dilemma with developers using their existing APIs). The WHATWG, for its part, has pretty much given up on versioning, declaring HTML a "living standard" that tries to converge on working implementations rather than handing a finished specification down to the browser vendors and web developers.

Conclusion?
Caught between rapid innovation and massive uptake, the tools that were chosen were the tools that were available. HTML, JavaScript, and CSS were there for us when we needed them, and there are too many fields being plowed to allow the workhorses back in the barn. Web programming was cursed by astounding success.

And what a success it has been. This stuff may be the most visible raw innovation ever. Anyone with a computer can open a text editor and write code to run in a browser. Thousands of people around the world have changed their lives by doing just that, and it has only just begun. The browser is a development environment that exists on literally every computer shipped today. As more and more companies converge on the Web, the standards will continue to improve, and they will continue to be left behind. And the programmers? Well, we will continue to be faced with stupid edge cases, ugly work-arounds, odd incompatibilities, and the abiding satisfaction we get when things actually work.

Post Script
Lee also asked about resources I have used learning what little I know of web programming, so I'd like to add a bit of a link-dump here at the end.

Of course, I'm willing to attempt to answer questions as well, though if you go through all of that, you will probably know a fair bit more on the subject than I do.

Sunday, November 20, 2011

Quote of the Moment

"Hope is a decision we make, a choice to believe that God can take the adversity, the disappointment, the heartache, and the pain of our journeys and use these to accomplish his purposes."  —Adam Hamilton, The Journey: Walking the Road to Bethlehem.

Thursday, November 17, 2011

Clicking and Wrapping

Your browser does not support the canvas element, so this example will not function.

Another day, another JavaScript/canvas experiment.  This time around it's all about clicking squares with your mouse. Go ahead, click the squares. In general, this tiny example of hit detection using mouse events would not be worth even posting, much less talking about, but it illustrates some of my issues with the development world moving in the direction of HTML5/CSS3/JavaScript interfaces for everything.

I will leave the debate over the worthiness of JavaScript to people more knowledgeable than myself. Instead, I will gripe about using DOM events. In my little example here, if I want to let the canvas element handle mouse clicks, I just attach a mouse click event handler to it and I'm off to the races, right? Naturally, it isn't that simple. First, there is no single way for the event's information to be passed into the event handling function: standards browsers pass it as a parameter, while older Internet Explorer versions expose it only through a global property. Once you work your way past that with some boilerplate code that has to be repeated every time, you get to the bit that really made me tired. The event data for the mouse click, though attached to a particular element of the page, does not actually have any relationship to the element. Click the mouse on the canvas, and you have your choice of receiving coordinates for that event in relation to the browser window or the browser client area, neither of which is actually useful for figuring out where in the canvas the mouse click happened. The example I got working walks from the canvas element up the DOM tree until it finds the body element, adding up the coordinate offsets as it goes, then adds the browser window padding to get the location of the canvas in window coordinates, which in turn are used to calculate the click position relative to the canvas's coordinates. Sigh.
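To make that offset walk concrete, here is a stripped-down sketch of the approach. The function names are mine, window padding is ignored for brevity, and the plain objects below simulate DOM elements through the traditional offsetLeft/offsetTop/offsetParent properties:

```javascript
// Sum offsets up the offsetParent chain to find an element's
// position on the page.
function elementPagePosition(el) {
  var x = 0, y = 0;
  while (el) {
    x += el.offsetLeft;
    y += el.offsetTop;
    el = el.offsetParent;
  }
  return { x: x, y: y };
}

// Convert a click's page coordinates into element-local coordinates.
function clickInElement(pageX, pageY, el) {
  var pos = elementPagePosition(el);
  return { x: pageX - pos.x, y: pageY - pos.y };
}

// Simulated layout: a canvas inside a div inside the body.
var body = { offsetLeft: 0, offsetTop: 0, offsetParent: null };
var div = { offsetLeft: 10, offsetTop: 20, offsetParent: body };
var canvas = { offsetLeft: 5, offsetTop: 5, offsetParent: div };

var local = clickInElement(100, 100, canvas); // {x: 85, y: 75}
```

With the local coordinates in hand, hit detection against the squares is just a rectangle-containment check.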

Presumably, this lack of local coordinates has something to do with the event bubbling mechanism in the DOM. Or perhaps it is just laziness in the implementation that forces the programmer to manually calculate an element-relative mouse position rather than having the library do it. My ignorance about why this behavior would be desirable is an ever-present third possibility. In any case, the DOM and CSS are filled with these little oddities and incompatibilities.

People's answer for such irritants remains the same everywhere I've seen it: create utility functions/libraries/frameworks that overlay the standard interfaces. To be clear, I don't mean extensions that add functionality, or even things that simplify for specific purposes, but rather things that seek to provide exactly the same functionality in an "easier" or "better" way. Generally, I take a proliferation of such "utility" wrappers to mean that folks are hiding problems that might be better off fixed in the standards themselves.

Most of the time, programmers or even groups of programmers do not have the capacity to update the standards. We can't all be part of the W3C, for instance, and nothing would get done if we were. That said, judging by the number of available wrappers I'm seeing in my brush with web development, I suspect the debate over the future path (or death) of JavaScript may be even more important than people realize.

Tuesday, November 15, 2011

Keeps on Ticking

My experimentation with HTML5's canvas element and JavaScript continues with this analog-style clock. I don't have much to say about the code save that I treated the canvas as a plain raster device, and there may be easier, or at least different, ways to generate the clock face using rotations. If there is anything about it you would like to know, feel free to ask in the comments. For another analog clock example that does not use canvas at all, see the timing events page of the W3Schools JavaScript tutorial.

Your browser does not support the canvas element. This post will not display correctly.
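Since the hands on a raster-style clock are just line segments, the only real math is turning the time into an endpoint. A sketch of that piece (the function name is mine):

```javascript
// Compute the endpoint of a clock hand on a plain raster canvas.
// 'fraction' is how far around the dial the hand has traveled:
// 0 at twelve o'clock, increasing clockwise (canvas y points down,
// so the usual screen-coordinate math works out naturally).
function handEndpoint(cx, cy, length, fraction) {
  var angle = fraction * 2 * Math.PI - Math.PI / 2; // shift so 0 points up
  return {
    x: cx + length * Math.cos(angle),
    y: cy + length * Math.sin(angle)
  };
}

// A second hand at 15 seconds points due right (three o'clock).
var tip = handEndpoint(100, 100, 50, 15 / 60);
```

Each tick then just clears the canvas, redraws the face, and recomputes three of these endpoints for the hour, minute, and second hands.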

Wednesday, November 9, 2011

Quote of the Moment

"The woods would be very silent if no birds sang except those that sang best."  —Henry Van Dyke

Tuesday, November 8, 2011

LEDs and Solar Power, like Chocolate and Peanut Butter

ArsTechnica's look at the LED lighting landscape includes some nice tidbits on the current state of fluorescent lighting as well and is well worth a read, if only because it gives a glimpse into how the next generation might be decorating their houses. Judging by the product lists available at the big-box home improvement stores, we still aren't quite there yet, but the selection is slowly getting better. LED lighting just makes sense, even if it does cost more at the moment.

The Navy's recent experiments with more energy-efficient forward base designs found that the combination of LED lighting and integrated solar cells dramatically reduced both the power footprint and soldiers' battery use. The general upward trend of fuel costs and the global economic slowdown appear to be combining to encourage alternative energy sources. The business opportunities surrounding alternative power generation seem to be taking hold, enough so that Slashdot featured a roundup of sources demonstrating a quickly growing US solar power industry. Even Google wants to get into the game by owning and subsidizing consumer solar installations, essentially turning itself into a distributed utility. Politics play a part as well, and several potential examples of shady political dealings have not stopped the government from continuing to push solar subsidies.

Lower power draw lighting solutions fit naturally into the futurist vision of ubiquitous photovoltaic power generation, and slowly economic and political forces are beginning to line up to make the changes happen.