Tag Archives: javascript

Generating Noise Textures

My current play-with-random-web-tech side-project is a solitaire game. Well, actually it’s a framework for implementing solitaire games, because I’m a programmer and a side project is the right place to exercise my desires for needless abstraction. I want a nice felt-looking background texture, and the way to get that is to add some noise to an image. Adding some subtle noise to an image is a really common approach to achieving a nice visual aesthetic – as I understand it – I mentioned I’m a programmer.

My project ships as a single HTML file. I’m developing it in TypeScript with LESS styling, but then I can bundle it all up as a single file to scp to a server. I’m not using any external images – Unicode includes plenty of handy symbols, and the one other image I needed (a Mahjong bamboo symbol) was just a tiny snippet of SVG. Including a noise texture as an external image, or inline as a data: URI, is kind of gross. Noise textures by definition don’t compress well.

I was able to build something pretty nice by combining HTML Canvas’ raw pixel data access and Web Crypto’s getRandomValues(). The window.crypto.getRandomValues() call fills a typed array with random data. Actually it’s generated by a PRNG seeded with a little entropy, so it’s fast and won’t waste all your hard-earned entropy. Canvas’ createImageData() and putImageData() let you work with canvas pixels as typed arrays.

The only catch is that getRandomValues() will only fill 64KiB of data per call. Canvas image data is 32bpp RGBA, so all you can generate trivially (without repeated calls to getRandomValues()) is a 128×128 texture. Used subtly, I don’t notice the repetition when it tiles.

The completed example is on JSFiddle: https://jsfiddle.net/ianloic/2uzhvg8h/

Simply logging JavaScript calls.

When debugging complicated JavaScript one thing I find myself constantly doing is using console.log() to print out what functions are being called in what order. JavaScript is single-threaded and event driven so it’s often not entirely clear what functions will be called in what order.

Traditionally I’ve done something like this:

function foo(bar) {
  console.log('foo('+bar+')');
}

but last night I came up with something better. It’s probably not completely portable but it seems to work fine in recent Chrome / Safari / Firefox, which is really all I’m going to be using for debugging anyway:

function logCall() {
  console.log(logCall.caller.name + '(' +
    Array.prototype.slice.call(logCall.caller.arguments)
    .map(JSON.stringify).join(', ') + 
    ')');
}

Just add logCall() to the start of functions and the function call (including serialized arguments) will be logged to the console. Easy and foolproof.

Detecting advanced CSS features

I decided to do some fancy typography in my résumé, but I wanted to have a sane fallback for browsers that don’t support the latest features. I have my name rotated 90 degrees using CSS3 transforms, but simply applying the transform: rotate(-90deg) CSS property isn’t enough to get the effect I want. To make it look right I also need to measure the text (since its dimensions will change depending on the viewer’s fonts) and absolutely position it based on that. Standard CSS fallback (where unrecognized rules are ignored) doesn’t give a good result. At all.

After a bunch of fiddling and cross-browser testing I found that testing the existence (!==undefined) of properties on element.style worked well. So I put all of my rotation rules in a CSS class that’s applied only if the style properties exist.

<style type="text/css">
  h1.rotated {
    -webkit-transform-origin: left bottom;
    -moz-transform-origin: left bottom;
    -o-transform-origin: left bottom;
    transform-origin: left bottom;

    -webkit-transform: rotate(-90deg);
    -moz-transform: rotate(-90deg);
    -o-transform: rotate(-90deg);
    transform: rotate(-90deg);

    position: absolute;
    left: 0px;
    top: 0px;
    margin-top: 16px;
  }
</style>
<script type="text/javascript">
    // rotate the title, if that's even possible
    var h1 = document.getElementsByTagName('h1')[0];
    var s = document.body.style;

    // test for the presence of CSS3 transform properties
    if (s.transform !== undefined ||
        s.WebkitTransform !== undefined ||
        s.MozTransform !== undefined ||
        s.OTransform !== undefined) {
      // move
      h1.setAttribute('style',
          'top:' + ( h1.offsetWidth - h1.offsetHeight) + 'px');
      // and rotate
      h1.setAttribute('class', 'rotated');
    }
</script>

Of course, I’d rather just check for transform: support instead of -webkit-transform:, -moz-transform: and -o-transform:, but we’re not there yet. Anyway, based on browsershots.org it seems to do the right thing on every known browser. It shows rotated text in very recent Webkit and Gecko browsers and in Opera nightlies, and correctly laid-out, unrotated text everywhere else.
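The same test generalizes to any set of vendor-prefixed properties. Here’s a generic version of it (supportsAny is my name for it, not a standard API):

```javascript
// A style property the browser recognizes shows up on element.style
// (initially as an empty string); unrecognized ones are undefined.
function supportsAny(style, names) {
  for (var i = 0; i < names.length; i++) {
    if (style[names[i]] !== undefined) return true;
  }
  return false;
}

// In the browser:
// supportsAny(document.body.style,
//     ['transform', 'WebkitTransform', 'MozTransform', 'OTransform']);
```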

jQuery selector escaping

jQuery selectors are powerful and simple to use, until you have attribute values (including ids and classes) that contain funny characters. I wrote a plugin that adds a simple function $.escape to escape any special characters. For example, if I have a string s that contains an id but I’m not sure that all of its characters are safe, I can use $('#'+$.escape(s)) to find the element with that id. If I want to find all of the links to http://ianloic.com/ I can do $('a[href='+$.escape('http://ianloic.com/')+']').
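A minimal sketch of such an escaper – escapeSelector is my name for it here, and this isn’t necessarily the plugin’s exact implementation:

```javascript
// Backslash-escape every ASCII punctuation character that can carry
// meaning inside a selector, so the value is matched literally.
function escapeSelector(s) {
  return String(s).replace(/[!"#$%&'()*+,.\/:;<=>?@[\\\]^`{|}~]/g, '\\$&');
}

// escapeSelector('http://ianloic.com/') === 'http\\:\\/\\/ianloic\\.com\\/'
```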

The first release is here. I don’t know that there will be any more releases – this is a pretty damn simple plugin.

A brighter future for mobile applications?

Since the Chrome OS announcement the other day I’ve been thinking more about what a world with rich enough web APIs to support all general purpose applications might look like. I’m not sure that it’ll happen, but it sounds like Google is putting their weight behind it and they’ve been successful in the past at moving our industry in new directions (remember the world before GMail and Google Maps?).

A richer set of standard web APIs might form the basis for a cross-manufacturer mobile platform. The Palm WebOS stack already kind of looks like Chrome OS (though with local HTML+JS apps rather than remote ones) and the original iPhone application model was exactly what Chrome OS proposes. The limitations that forced Apple to create a native developer platform are exactly the ones that Chrome OS plans to address.

Of course Google’s own mobile platform is decidedly non-web, and Apple’s much larger volume of applications discourages it from supporting standard APIs. The handset manufacturers, OS developers and carriers are all making a ton of money selling applications in a model that’s reminiscent of pre-web software models. The only real winners from a move to a web model for mobile applications would be the users.

Not solving the wrong problem

I like a great deal of what Google does for the open web. They sponsor standards work, they are working on an open source browser, and they are building documentation on the state of the web for web developers. It’s all really great. Today they posted what they called A Proposal For Making AJAX Crawlable. It seems like a great idea. More and more of the web isn’t reached by users clicking on a conventional <a href="http://… link but by executing JavaScript that dynamically loads content off of the server. It’s somewhere between really hard and impossible for web crawlers to fully and correctly index sites that work that way without the sites’ developers taking crawlers into account.

Google’s proposal is to define a convention for URLs that carry state information in the fragment, and a convention for retrieving the canonical, indexable contents of a URL with such a fragment. First, let me dismiss out of hand the suggestion that you make a headless browser available over HTTP to render your AJAX pages to HTML. If it’s so easy for HtmlUnit to render your AJAX to HTML, surely Google can do it themselves. And offering HtmlUnit as a web service on your own server doesn’t sound that secure or scalable to me.

The bigger question is: if your solution requires the server to be able to serve the correct HTML for any state, would you come up with the same solution as Google? There is a simple, straightforward solution that works today and is used on sites all over the internet. If the content you serve includes the static, non-AJAX URLs in anchor HREFs, but uses JS click handlers to do AJAX loads, then crawlers can scrape all of your pages, users of modern browsers get the full shiny experience, and users on old mobile browsers that don’t support JS get pages that work, for free!

To do this you can either make your AJAX templates include onclick handlers or you can write a simple piece of JS to do the right thing when any link is clicked on. A contrived example using jQuery might look like:

      $(function() {
        $('body').click(function(event) {
          // find the anchor (if any) that was actually clicked
          var link = $(event.target).closest('a');
          if (link.length === 0) return;
          var href = link.attr('href');
          // don't try to AJAX absolute URLs
          if (!href || href.match('^https?://')) return;
          // don't let the normal browser navigation operate
          event.preventDefault();
          // based on the link's href, decide what AJAX URL to load
          $('#ajaxframe').load('/load-fragment', {path: href});
          // update the URL bar
          document.location.hash = href;
        });
      });

This will intercept clicks on relative anchor tags and let your page JS do its AJAX magic. It doesn’t require special conventions. If you build your site this way you’ll probably find that the state in your URL fragments is the relative URL for the page on your site. So http://www.example.com/random/page and http://www.example.com/#/random/page have the same meaning. That turns out to be a pretty good convention. After all, aren’t our URLs supposed to refer to resources anyway?
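With that convention, mapping the fragment back to a resource path is a one-liner – hashToPath is a made-up name for illustration:

```javascript
// '#/random/page' → '/random/page'; any other fragment isn't ours.
function hashToPath(hash) {
  return hash.indexOf('#/') === 0 ? hash.substring(1) : null;
}
```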

Three Months of ActionScript

I’ve been working largely in ActionScript 3 for the past three months. After spending four years working primarily in JavaScript I didn’t expect to encounter too many problems with its cousin. That expectation was pretty well borne out.

I was particularly excited at the chance to play with ActionScript’s optional strict typing since that has been under consideration for future versions of JavaScript / ECMAScript. AS3’s strict typing hasn’t felt optional to me because I’ve been largely using the Flex Builder IDE and it excitedly warns me every time I fail to specify a type on a variable or function signature, so I put them everywhere. Instead of implementing encapsulation using conventions like I do in JS I use AS3’s Interfaces. This felt good at first but I quickly realized that I couldn’t do the kinds of tricks that I’d become accustomed to in JS. I can’t create a typed object in a function and return it without defining a class for it in its own file. This has made the higher level structure of my software far closer to Java than to JavaScript.

On the other hand, inside my functions I’m pretty much writing JavaScript. All of the standard JS built-in objects (Object, Array, Math, etc) behave like modern JS implementations. I use anonymous functions to manipulate and filter arrays. I use Objects and Arrays as my primary data structures.

At this point I’m pretty used to AS3. I know how to design my code and I can be pretty productive, not needing to rewrite everything too many times. I’d like to see Adobe mix the Java-style classes and interfaces up with the JavaScript habit of declaring anonymous functions and objects as you go. I’d be thrilled if I could write something like:

var x : IMyInterface = new IMyInterface {
  method: function() : void { ... },
  field: 42
};

I see that they’re extending the language with limited generics support, so hopefully that won’t be the end of it. I’m looking forward to experimenting more on the platform.

Twitter Translation

My friend Britt mentioned today that he was about to launch twitter.jp. How exciting! But I don’t understand Japanese. If only I could easily translate all those tweets in languages I don’t understand.

I played around with Google’s new AJAX Translation API before and I wondered how hard it would be to use that from a GreaseMonkey script. The answer: hard. I’m not sure what the exact problem was, but every way I tried to include Google’s APIs into the pages I was manipulating – including creating my own iframe and using document.write – failed. In the end I used a static proxy html file (hosted in one of my Amazon S3 buckets for cheap efficiency) with some sneaky cross-site communication (the request goes over in location.hash, the response comes back in window.name).

My script is now up on userscripts.org: twitlator.user.js

To use it, simply go to a page listing a bunch of tweets, like the public timeline, and find a tweet in a language you don’t understand. For example:

Click the translate! link and several moments later you’ve got:

Yep. People aren’t posting anything interesting in Japanese either.

I’m pretty sure my script will only work on Firefox 3. I’m using getElementsByClassName which I think wasn’t introduced until Firefox 3. Why aren’t you running it already?

Google AJAX APIs outside the browser

Google just announced their new Language API this morning. Unfortunately their API is another one of their AJAX APIs – that are designed to be used from JavaScript in web pages. These APIs are pretty cool for building client-side web applications – I used their AJAX Feeds API in my home page – but I had some ideas for server software that could use a translation API.

I remembered John Resig’s hack from a few months back, getting enough of the DOM in Rhino to run some of the jQuery unit tests. I pulled that down, wrote the bits of DOM that were missing, and now I’ve got a Java and JavaScript environment for accessing Google’s AJAX APIs. Apart from creating stubs for some methods that get called, the main functionality I had to implement was turning Google’s APIs’ asynchronous script loading into the Rhino shell’s load(url) calls. They use <script src="... and document.createElement("script"), but both are pretty easy to catch. The upshot of this is that everything is synchronous. This subtly changes a lot of the interface. For example, my Language API example looks like this:

load('ajax-environment.js');
load('http://www.google.com/jsapi');
google.load('language', '1');
google.language.translate("Hello world", "en", "es", 
  function(result) { 
    print(result.translation);
  });

it of course prints: Hola mundo.
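The heart of the trick is catching the script injection. This is my own reconstruction of the idea, not the actual shim from the repo – a fake document whose appendChild runs the shell’s load(url) synchronously instead of fetching a script tag asynchronously:

```javascript
// Build a minimal fake document good enough for a script loader:
// createElement hands back a stub element, and appending a <script>
// to the fake <head> triggers a synchronous load of its src.
function makeSyncScriptDocument(load) {
  return {
    createElement: function (tag) {
      return { tagName: tag.toLowerCase(), src: '' };
    },
    getElementsByTagName: function () {
      // one fake <head> element to append into
      return [{
        appendChild: function (el) {
          if (el.tagName === 'script') load(el.src);
        }
      }];
    }
  };
}
```

The real environment stubs out rather more of the DOM than this, but intercepting createElement and appendChild is what makes the loading synchronous.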

I’ve put the source up on github. Have a play, tell me what you think.