Wednesday, June 18, 2008

Installing Synergy For Linux and Windows

posted by Jonah Dempcy

Synergy is a great app that allows you to control your Linux and Windows computers via a single keyboard and mouse. You can plug in the keyboard/mouse to either Linux or Windows machines and fluidly switch between computers just as easily as you would switch between monitors in a dual-monitor set up.

Here is a description from the Synergy site that demonstrates this functionality:

In this example, the user is moving the mouse from left to right. When the cursor reaches the right edge of the left screen it jumps instantly to the left edge of the right screen.

You can arrange screens side-by-side, above and below one another, or any combination. You can even have a screen jump to the opposite edge of itself. Synergy also understands multiple screens attached to the same computer.

I've been using it for a couple of years now and I find it an invaluable addition to the arsenal of any developer who works in both Windows and Linux. I built a new computer over the weekend and finished installing Ubuntu 8.04 Hardy Heron tonight. One of the first things I did was install Synergy. Previously, I set it up on Red Hat Enterprise Linux, but either way, the setup is pretty straightforward.

Let's get started installing Synergy, shall we? The Windows setup is easier because it has a GUI. In Windows, go to the Synergy SourceForge page to download the latest release (version 1.3.1 as of this writing). Here's the download link if you don't want to be bothered with navigating the SourceForge page:

Once it's downloaded, install it and open the synergy.exe file to launch the program.

I prefer to set Windows as the client and Linux as the server, although it was recommended in a forum post to do the opposite in Windows Vista. Regardless, I haven't had issues with Vista or XP in either configuration.
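
For completeness: if you do run it the other way around, with Windows as the server, the Linux box runs the client instead. A minimal sketch, assuming your Windows machine's host name is 'winbox' (substitute your own):

    synergyc -f winbox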

For now, let's assume your keyboard and mouse are hooked into the Linux box and you want to make Windows the slave (client). To do this, simply click the radio button next to Use another computer's shared keyboard and mouse (client). Then, enter the Linux computer's host name next to Other Computer's Host Name. For me, this is 'ubuntu', but you may have been more creative with your host name.

We're not quite ready to test it yet, so leave this window open and go back to Linux for a moment. In Ubuntu, install Synergy by typing:

sudo apt-get install synergy

If you prefer doing things in the console, you can create the config file manually. Otherwise, you can use a GUI called QuickSynergy to get up and running. Here's how to do it either way:

The manual way might take a little longer, or not, depending on how good you are with command-line interfaces. To configure it manually, first create a configuration file called synergy.conf that looks like this:

    section: screens
        screen1:
        screen2:
    end
    section: links
        screen1:
            right = screen2
        screen2:
            left = screen1
    end

You can place the config file in /etc/ or /usr/local/etc/ (whichever you prefer), or keep it anywhere else you like as long as you point synergys at it with the --config option shown below.

For me, this file says:

    section: screens
        ubuntu:
        laptop:
    end
    section: links
        ubuntu:
            right = laptop
        laptop:
            left = ubuntu
    end

My laptop is, of course, named 'laptop' -- again, feel free to use more imaginative names (as long as it is actually the name of the computer). It isn't necessary to use the name of the computer as the name of the screen but it requires extra configuration otherwise. (See the official documentation for more information on this).
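
If you do want the screen names to differ from the host names, my understanding from the documentation is that you add an aliases section mapping each screen name to the other names that machine might report -- roughly like this (the fully-qualified name below is made up for illustration):

    section: aliases
        laptop:
            laptop.example.com
    end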

Now that you've created the file, you're ready to launch Synergy. Give it a try with this line:

    synergys -f --config synergy.conf

Assuming it starts correctly, jump back to Windows and click 'Test' in Synergy there. It should say that it connected OK and everything is fine and dandy. If that's the case, then just click 'Start' and you're done. If not, visit the docs page and scroll towards the bottom to troubleshoot the issue.

If you prefer configuring Synergy graphically, that's an option, too. I guess I should have put the GUI solution first for us lazy developers, but it's good to be familiar with the non-graphical way anyway. I know I managed to mess up my config file using the GUI and had to dive into it by hand regardless.

That being said, the GUI (called QuickSynergy) is very straightforward and easy to use. To install QuickSynergy, type the following in a Linux terminal:

sudo apt-get install quicksynergy

Once it's installed, you can type quicksynergy to launch it (add & at the end if you want to keep your terminal free), and a window will pop up that allows you to configure Synergy as either host or client.

The default tab which opens is for host, and all you have to do is enter the correct screen names and then start it. For me, my Linux desktop (named 'ubuntu') is on the left and my Vista laptop (named 'laptop') is on the right, so I just made sure that the fields to the left and right of the computer image said 'ubuntu' and 'laptop', respectively.

QuickSynergy is theoretically easier to use than the command line way detailed above, so I won't go into much depth here. If you get stuck with QuickSynergy, either try the command line way, or visit the QuickSynergy SourceForge site to see screenshots and example configurations.

Extra Features:

  • Synergy has the ability to auto-sync starting and stopping of screensavers
  • You can copy and paste between Linux and Windows (this is a huge time-saver!)
  • Easily lock the mouse/keyboard to the current screen by toggling scroll lock (assignable to any other key)


Tuesday, June 17, 2008

What's New in MooTools 1.2

posted by Jonah Dempcy

I'm happy to announce that MooTools (Wikipedia link) has released version 1.2 of their excellent JavaScript library. MooTools, which stands for 'My Object Oriented Tools', was developed in 2006 by Valerio Proietti and his colleagues. It evolved out of Moo.fx, a lightweight effects library that plugged into the Prototype framework and was similar to, though smaller and (in my opinion) better than, the script.aculo.us library. Moo.fx has now been fully integrated into the MooTools library and is not being developed further at this time.

Even before MooTools' 1.0 release on January 29, 2007, it had garnered quite a bit of buzz. There were even cheat sheets created for the beta MooTools library.

Thus, it is with great excitement that I announce a new version of this marvelous framework, with a great deal of improvements and additions to the codebase. I've been using 1.2 beta for quite a while and I think the official release is mostly a bug fix of the beta, so if you've been following this blog, chances are you've already been exposed to some of the new features in 1.2. Regardless, here's a full list of features and enhancements you'll find in the new release:

  • Swiff, support for working with Flash SWF files, similar to the swfobject library
  • Element storage allows you to store data in custom properties on HTML elements without leaking memory in IE
  • Overhaul of Fx classes with many improvements, including creating a Tween class to create reusable animation tweens
  • Overhauled Ajax requests; the Ajax class has been renamed Request, with JSON and HTML subclasses for easily handling their respective data formats as Ajax responses (see the quick sketch after this list)
  • Element.Dimensions - makes it a breeze to get width, height, x/y coordinates of an element (either relative to document or to positioning context) and scroll height/width
  • Created a Browser class to store browser, platform and feature information (e.g. whether the browser supports XPath or not). Before, browser info was stored on the window object. Also, this release renamed the properties from browser names to rendering engine names, e.g. trident4 instead of ie6.
In addition to the changes to the API and codebase, the following changes occurred as well:

  • MooTools now adheres to behavior driven development using specs
  • The Hash Object - with get, put, each, some and a whole lot of other methods for manipulating data in a hash
  • MooTools is now developed using Git instead of Subversion - this will only affect you if you're used to grabbing code from svn (or if you're a contributor!)
  • MooTools uses Lighthouse instead of Trac for bug tracking now
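
To give a quick taste of some of the APIs listed above, here's a minimal sketch. It assumes MooTools 1.2 is loaded, an element with id 'box' exists, and the URL is made up for illustration:

// Element storage -- stash data on an element without leaking memory in IE
$('box').store('clicks', 0);

// Tween a single CSS property
$('box').set('tween', {duration: 500});
$('box').tween('opacity', 0.5);

// Request.JSON -- an Ajax request whose response is decoded as JSON automatically
new Request.JSON({
    url: '/some/endpoint',   // hypothetical URL
    onSuccess: function(data){
        alert(data);
    }
}).get();

// Hash -- get, set, each and friends for working with key/value data
var versions = new Hash({moofx: 2, mootools: 1.2});
alert(versions.get('mootools')); // 1.2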

In light of all these improvements to an already excellent library, I think it's apparent that MooTools is really growing up and coming into its own. It's a force to be reckoned with and certainly a heavyweight contender against Prototype, jQuery, YUI and others.

I hope you've enjoyed this brief overview of some of the new features in MooTools 1.2. Now get out there and start coding!

Related links:
Docs and Demos
Compatibility
Git Repositories
Bug Tracking
MooTools User Groups


Sunday, June 15, 2008

Javascript image rotator viewer

posted by Alex Grande
See a demo at http://www.alexgrande.com -- it's the rotating images at the top right.
I wrote this in an object-oriented style so you can reuse it multiple times on a page.
Javascript:
var ImageGallery = function() {};

ImageGallery.prototype = {
 initialize: function(mainImage, listWrapper){
  // large image that is shown
  this.mainImage = document.getElementById(mainImage);
  // a list of all the anchors for the thumbnails. Must be a tags for graceful degradation
  this.thumbnails = document.getElementById(listWrapper).getElementsByTagName("a");
  // image 0 is shown already, so the first one we want to switch to is the next image, index 1
  this.i = 1;
  // this is a work around to allow calling this within nested functions
  var Scope = this;
  // Here starts the rotating of the image by first focusing the thumbnail, then switch the primary image
  this.start = setInterval(function(){
   Scope.focusCall();
  // Here we choose 5 seconds in between each image change. You may want to change this.
  }, 5000);
  // This lets the browser know to do something if one of the thumbnails is clicked
  this.clickEvent();
 },
 
 // This stops the rotation of the thumbnails. We do that if the user clicks one of them
 stop: function(){
  if (this.start) 
   clearInterval(this.start);
 },
 
 // When a thumbnail loses focus, switch it back to the default CSS class
 resetBorderColor: function(reset){
  reset.getElementsByTagName("img")[0].className="thumbnailDefault";
 },
 
 // When a thumbnail gains focus we must give it the corresponding styles
 focusBorderColor: function(focused){
  focused.getElementsByTagName("img")[0].className="thumbnailFocus";
 },
 
 
 // Here we grab the href of the a tags and make their path be the path of the current image
 imageRotator: function(){
  this.mainImage.src = this.thumbnails[this.i].href;
  this.previousImage = this.thumbnails[this.i];
  this.i++;
  // This closes the loop for the rotation
  if (this.i == this.thumbnails.length) 
   this.i = 0;
 },
 
 // We focus the thumbnail
 focusCall: function(){
  // reset the last image that was shown
  if (typeof this.previousImage != 'undefined') 
   this.resetBorderColor(this.previousImage);
  // Remember the newer one
  this.currentImage = this.thumbnails[this.i];
  // Give the newer image some focus
  this.focusBorderColor(this.currentImage);
  var Scope = this;
  // Let the main image rotate to the new one 300 milliseconds after the thumbnail gets the CSS focus
  window.setTimeout(function(){
   Scope.imageRotator()
  // You may want to change this number 
  }, 300);
  
 },
 
 // This is what happens when you click the thumbnails
 clickEvent: function(){
  var Scope = this;
  for (var k = 0; k < this.thumbnails.length; k++) {
   this.thumbnails[k].onclick = function(){
    if (typeof Scope.previousImage != 'undefined') 
     Scope.resetBorderColor(Scope.previousImage);
    Scope.focusBorderColor(this);
    // Stop the rotation 
    Scope.stop();
    // This is where the switching happens for the click
    Scope.mainImage.src = this.href;
    Scope.previousImage = this;
    // Make sure to not allow default behavior of the a tag
    return false;
   }
  }
 }
 
}


Next, create an instance of the viewer and hook it up to run when the page loads:
var imageGallery1 = new ImageGallery();

// Not sure where I got this.. I didn't write this but it allows you to load multiple functions on the window.onload.
function addLoadEvent(func) {
 var oldonload = window.onload;
 if (typeof window.onload != "function") {
  window.onload = func;
 } else {
  window.onload = function() {
   oldonload();
   func();
  }
 }
}

var onLoad = function() {
 imageGallery1.initialize("index_largepic_display", "index_thumbnail_display");
}
 

addLoadEvent(onLoad);
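
And since nothing in the class refers to hard-coded ids, a second rotator on the same page is just another instance -- the element ids below are hypothetical, purely to illustrate the reuse:

var imageGallery2 = new ImageGallery();

addLoadEvent(function() {
 imageGallery2.initialize("sidebar_largepic_display", "sidebar_thumbnail_display");
});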


For the version on my homepage, alexgrande.com, here is the CSS and HTML. The CSS is up to you, but I suggest following a similar structure with the HTML.

HTML:
<div id="gallery">
  <ul id="index_thumbnail_display">
    <li>
      <a href="images/index/fernlarge.jpg">
        <img class="thumbnailDefault" src="images/index/fernthumb.jpg" alt="A picture of a fern just as it is unraveling in front of a log." />
      </a>
    </li>
    <li>
      <a id="partythumb" href="images/index/partylarge.jpg">
        <img class="thumbnailDefault" src="images/index/partythumb.jpg" alt="Downtown Seattle at night." />
      </a>
    </li>
    <li>
      <a id="alexthumb" href="images/index/alexlarge.jpg">
        <img class="thumbnailDefault" src="images/index/alexthumb.jpg" alt="I'm on a laptop at night in a field, using the internet via hacking a telephone box... legally." />
      </a>
    </li>
    <li>
      <a href="images/index/trucks.jpg">
        <img class="thumbnailDefault" alt="Trucks lined up in Sodo in Seattle at night." src="images/index/trucksthumb.jpg" />
      </a>
    </li>
  </ul>
  <img id="index_largepic_display" src="images/index/fernlarge.jpg" alt="A picture of a fern just as it is unraveling in front of a log." />
</div>


CSS:
div#gallery {
 position:relative; 
 float: left;
 overflow: hidden;
 width: 65%;
}

img#index_largepic_display {
 border:1px black solid;
 margin-bottom:20px;
}

ul#index_thumbnail_display {
 list-style-type:none;
 position:absolute; 
 top: 0;
 left: 0;
 margin-top: 15px;
}

ul#index_thumbnail_display li a {
 padding:10px;
}

.thumbnailDefault {border: 1px solid gray !important;}

.thumbnailFocus {border: 1px solid red !important;}

Thursday, June 12, 2008

Saving State: What To Do When Users Leave

posted by Jonah Dempcy

In this era of rich JavaScript applications, so much focus is given to the features of the application that one crucial element is often overlooked: What happens when the user leaves the page? We take it for granted that pages will look the same when we leave and return, but a new question emerges for sites using rich JavaScript interaction: If the user leaves and returns to the page, will the application state be preserved?

The effects of losing application state can range from minor annoyances like losing what page you're on, to all-out frustration after losing a carefully-typed message because you accidentally triggered the browser's back button. (It's easier than you think-- hitting backspace when the document is in focus triggers the back button in most browsers). Couple this with the fact that some users may expect pages to save form data, because of their prior experience to that effect, and it becomes apparent that a robust strategy for preserving application state must be devised.

Browsers automatically save data entered into form fields, but all JavaScript variables are lost when the user leaves the page. Furthermore, any form fields that were created by JavaScript will also be lost. So, for all but the most simple applications, JavaScript must have a strategy for saving state that deals with these limitations.

Some sites like thesixtyone.com reside entirely on a single page and capture users' back button clicks with named anchors. But, try writing a wall post on Facebook and you'll find that it does not save the post if you leave the page. Accidentally pressing backspace is all too easy in cases like this where typing is involved, which is why sites like Gmail and Blogger warn users that they will lose data before leaving the page.

How To Warn Users Before Leaving the Page

One way you can do this is by assigning a confirmation message to the return value of the window.onbeforeunload event handler. The user will be presented with two choices, OK and Cancel, along with a custom message of your choosing.

In the following example, we register an anonymous function as the event handler for window.onbeforeunload and add our own custom message:

Using window.onbeforeunload to confirm if a user wants to leave the page (example 1)
window.onbeforeunload = function() { 
  return "You will lose any unsaved information."; 
};

The browser displays your custom message, given in the return statement of the onbeforeunload event handler, along with the browser default message. In Firefox, the result is:

Are you sure you want to navigate away from this page?
You will lose any unsaved information.
Press OK to continue, or Cancel to stay on the current page.

Retaining Data When Users Do Leave the Page

You may opt to silently save the user's data when they leave the page. This may give a better user experience since they are not confronted with a choice, and their data is saved automatically.

This is one of the times where Ajax comes in handy. However, there are also other ways to do this without using Ajax, such as cleverly storing information in named anchors or hidden form fields. We'll examine each of these practices in more depth, but suffice it to say that the hidden form fields approach works better for conventional websites that are spread across many pages, whereas storing data in named anchors is better for single-page, pure JavaScript applications.

It turns out that while you could (and should) save state to the server using Ajax, for some cases you will want to avoid Ajax altogether and use a simpler, clientside-only model.

Using Hidden Form Fields to Save State

As mentioned, all JavaScript objects are lost when the user leaves or refreshes the page. But, browsers will retain data in form fields, provided that the form elements were not generated using JavaScript. Given this limitation, it is necessary to save JavaScript variables (or the serialized JSON strings of such objects) to hidden form fields if they need to be retained.

Here is a basic example showing how variables can be stored to hidden form fields and restored on page load:

    
Saving data in hidden form fields (example 2)
// The variable userData is some necessary information we need from the user.
// The first time the user visits the page, they must enter this data manually.
// But, when leaving and returning to the page (or refreshing the page), we'll check
// if they already entered the data, and if so, restore it from a hidden form field.

var userData;

// Register event handlers
window.onload = function() {
 restoreState();
 if (!userData) {
    userData = prompt('Please enter the data to save', 'test');
 }
 // Display the value without document.write -- calling document.write after the page
 // has loaded would wipe the document, including the hidden 'saved-data' form field.
 document.body.appendChild(document.createTextNode("userData: " + userData));
}

window.onbeforeunload = saveState;

// This function is called onbeforeunload and writes the userData to the hidden form field
function saveState() {
   document.getElementById('saved-data').value = userData;
}

// This function is called onload and checks if any data is present in the hidden form field
// If so, it defines userData to be the saved data
function restoreState() {
   var savedData = document.getElementById('saved-data').value;
   if (savedData != "") {
        userData = savedData;
   }               
}

In the above example, all we're saving is one string from the user. But what about cases where we need to save many different values? For instance, what if we're using object-oriented code and have numerous nested objects within objects we need to store? At times like this, serializing objects with JSON is the easiest way to store the data. Without using JSON, you'd have to create a hidden form field for each value you want to save, whereas JSON can create string representations of complex data structures that you can easily eval back into JavaScript objects once they're fetched from the DOM.

So What is JSON, Anyway?

JSON (pronounced "Jason"), short for JavaScript Object Notation is a lightweight, human- and machine-readable way to represent the string serializations of objects. These strings can be evaluated back into JavaScript objects as needed. For instance, say I create a JavaScript object to represent a person (in this case, me):

var person = new Object();
person.name = "Jonah";
person.age = 24;
person.gender = "male";
person.location = "Seattle, WA";

The JSON representation of this object is as follows:

{
    'name': 'Jonah',
    'age': 24,
    'gender': 'male',
    'location': 'Seattle, WA'
}

Then, if you need to reconstruct the object at a later point, you can simply eval the JSON string:

var jsonString = "{'name': 'Jonah', 'age': 24, 'gender': 'male', 'location': 'Seattle, WA'}";
var person = eval( '(' + jsonString + ')' );    
var person = eval( '(' + jsonString + ')' );    

console.assert(person.name == 'Jonah');
console.assert(person.age == 24);
console.assert(person.gender == 'male');
console.assert(person.location == 'Seattle, WA');

If you've written JavaScript using object literal syntax before, this should be familiar to you. The main difference between JSON and standard JavaScript object literal syntax is that JSON requires quotes around the key in a key/value pair. So, name is a valid JavaScript key, but in JSON it would have to be quoted. (Note: the JSON spec formally calls for double quotes, but since we're evaluating the strings with eval here, single quotes work too, as long as they are matched.)

Stringifying Objects in JSON

To use JSON, it's necessary to include a library of JSON methods. Don't worry, the library is quite small. The entire thing shouldn't be more than 2k and can be obtained from json.org. Eventually, the JSON methods will be included as part of the core JavaScript language, but for the time being, we're left to use the methods provided by json.org or those found in libraries such as MooTools, Prototype and jQuery.

Depending on the library used, the method names for serializing an object into a JSON string are different. But, they are all used in rather similar fashion. For now, we'll assume you're using the library from json.org and use the method names provided in its API.

Saving complex JavaScript data structures as JSON strings (example 3)
// The variable userData is some necessary information we need from the user.
// The first time the user visits the page, they must enter this data manually.
// But, when leaving and returning to the page (or refreshing the page), we'll check
// if they already entered the data, and if so, restore it from a hidden form field.

var userData;

// Register event handlers
window.onload = function() {
 restoreState();
 if (!userData) {
    userData = new Object();
    userData.name = prompt('Please enter a name', 'Jonah');
    userData.age = parseInt(prompt('Please enter an age', '24'));
    userData.gender = prompt('Please enter a gender', 'male');
    userData.location = prompt('Please enter a location', 'Seattle, WA');                             
 }
 displayData(userData);
}

window.onbeforeunload = saveState;

// This function is called onbeforeunload and writes the userData to the hidden form field
function saveState() {
   document.getElementById('saved-data').value = JSON.stringify(userData);
}

// This function is called onload and checks if any data is present in the hidden form field
// If so, it defines userData to be the saved data
function restoreState() {
   var savedData = document.getElementById('saved-data').value;
   if (savedData != "") {
        userData = eval( '(' + savedData + ')' );
   }               
}

// This is a helper function that iterates through each property in an object and renders it in HTML.
function displayData(obj) {
 var list = document.createElement('ul');
 for (var property in obj) {
    var text = document.createTextNode(property + ': ' + obj[property])
     var line = document.createElement('li');
     line.appendChild(text);                 
     list.appendChild(line);
 }
 document.getElementsByTagName('body')[0].appendChild(list);
}

This example is pretty similar to the previous one where we saved a string. The only difference is that in this case, the string is a representation of a complex JavaScript object. In fact, you can save the entire state of your application in one JSON string, as long as the application state is completely stored as properties of a single object. There are a few minor gotchas, such as having to add parentheses around the JSON string when evaluating it. But, overall this is a clean and straightforward approach that is very useful when complex data structures must be retained.

Using Named Anchors to Save State

An alternate option for retaining state is to not actually let the user leave the page at all. Rather, when following links on the site, update the named anchor (everything after the number sign in a URL), instead of changing the actual document being displayed.

The problem that this is trying to solve is the fact that Ajax applications will normally break the back button. A user loads the application on the homepage and clicks to visit a different page, but since the new page is loaded in via Ajax, the browser URL doesn't change. Then the user clicks the back button and leaves the application altogether-- not the intention of the user, who just wanted to get back to the homepage.

Storing data in named anchors offers a solution to this problem. Each time the application state changes, JavaScript updates the named anchor with a token representing the application state. When the page is loaded, data is read from the named anchors and the state can be restored.

Say you're on the homepage of an ecommerce Ajax application and click on a product you'd like to view. Instead of changing URLs to the detail page, the application loads in new data with Ajax. So, when a user clicks on the new Brad Mehldau CD for instance, instead of going to a different URL (yoursite.com/brad-mehldau/) the document URL remains the same, but JavaScript updates the named anchor: yoursite.com/#brad-mehldau.
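
A bare-bones sketch of that mechanism might look something like the following -- the token handling and the loadProductView function are hypothetical stand-ins, not a full implementation:

// When the application state changes, record a token in the named anchor.
function saveStateToAnchor(token) {
    window.location.hash = token;   // e.g. yoursite.com/#brad-mehldau
}

// On page load, read the token back out and restore the matching view.
function restoreStateFromAnchor() {
    var token = window.location.hash.replace('#', '');
    if (token != '') {
        loadProductView(token);   // hypothetical: fetch and render the view via Ajax
    }
}

window.onload = restoreStateFromAnchor;

// Most browsers have no event for anchor changes, so back-button support usually
// means polling the hash on a timer and restoring state whenever it changes.
var lastHash = window.location.hash;
setInterval(function() {
    if (window.location.hash != lastHash) {
        lastHash = window.location.hash;
        restoreStateFromAnchor();
    }
}, 100);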

One site which does this unbelievably well is thesixtyone.com (Thanks, Derek!). The entire site resides in one document, truly a rich JavaScript application if I've ever seen one. But, despite the fact that the entire application is contained in a single document URL, due to clever use of named anchors, the site has full back button support and you can even email working links to friends.

A full implementation of named-anchor state saving is out of scope for this article (the sketch above only shows the basic mechanism), but you can see how it is somewhat similar to saving data in hidden form fields. In this case, there are a few more issues to mitigate and it's somewhat tricky, but the reward is an Ajax site with fully functional back button support and the ability to share links -- worth all the effort, in my book.

So What Use is Ajax, Then?

Since we've made it this far, you might think that there is no use for Ajax in all this. Actually, Ajax is great for saving state to the server, especially for saving data beyond the lifespan of the browser session. Ajax can be used to save messages periodically (like how Gmail and Google Docs automatically save on a timer every few minutes). It can also be used to send data when the user leaves the page by capturing the onbeforeunload event, but this is unreliable and I would not depend on that Ajax request completing. Instead, try to save the data before the user attempts to leave the page, by firing the Ajax request either on a timer or on another event on the page (a form element losing focus, for example).

Some frameworks like Prototype have serialize() methods that return URL query string representations of objects. This is perfect for saving data through GET requests. Yes, GET requests have a 2000-character limit and other limitations, but in most cases this won't be an issue. Even without helper methods to serialize objects, it's a fairly simple matter to construct an Ajax request that will save the necessary data to the server.
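
As a rough sketch of the timer-based approach -- the URL, the 'message' field id and the two-minute interval are all assumptions for illustration:

// Periodically send the current draft to the server with a plain XMLHttpRequest.
// (Wire these up after the page has loaded so the field actually exists.)
function autoSave() {
    var message = document.getElementById('message').value;   // hypothetical textarea
    var xhr = window.XMLHttpRequest ? new XMLHttpRequest()
                                    : new ActiveXObject('Microsoft.XMLHTTP'); // older IE
    // A GET with a query string keeps the example short; use POST for large drafts.
    xhr.open('GET', '/save-draft?message=' + encodeURIComponent(message), true);
    xhr.send(null);
}

// Save every two minutes, and also whenever the field loses focus.
setInterval(autoSave, 2 * 60 * 1000);
document.getElementById('message').onblur = autoSave;   // hypothetical field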

Wrapping Up

To recap, it is good practice to check whether users are sure they want to leave a page when they are entering information, but it's even better to silently save that information for them. (Arguably you would want to do both, like how Gmail and Blogger save state and ask users if they are sure they want to leave the page.) There are many different ways to save state, some purely client-side and others relying on saving data to the server with Ajax. The solutions that save data to the server are suitable for times when the data needs to be saved beyond the browsing session.

Of the two client-side solutions explored, hidden form fields and named anchors, the former is more suitable for conventional websites spanning many pages while the latter better suits single-page Ajax applications. Using named anchors also has the added benefit of allowing users to bookmark and send links to the JavaScript application in various states, and the state is preserved beyond the browsing session.

Whatever strategy you follow, your users will thank you for the time saved and the frustration avoided by not having to re-enter lost information.

Stylize the last element in jQuery

posted by Alex Grande
Here is how to style the last element using jQuery -- in this case, removing the border from the last row of a table.
$(document).ready(function() {
    $("table.innercart tr:last").css("border", "none");
});
You can compare to prototype by going here


Stylize the last element in prototype

posted by Alex Grande
Here is an example of using Prototype to remove the border from the last element in a list.
Event.observe(window, "load", function() {
    $$(".homepageContainer .upcomingEvents").last().setStyle({
        border: 0
    });
});
You can compare to jQuery by going here


Saturday, June 7, 2008

AJAX Google API to Minify Javascript using Ruby on Rails

posted by Jonah Dempcy

This post covers how to optimize JavaScript performance in Ruby on Rails. It focuses on just one aspect of JS performance optimization: writing a build script to concatenate and minify the JS, and setting up Rails to easily toggle between the compressed and normal files. Also, if your site uses a JavaScript library, we'll explore including it from Google's AJAX Libraries API.

The reason to do these optimizations is client-side performance. Concatenating many files into one is especially effective: ten 1-kilobyte files are much slower to download than a single 10k file. You might not think it makes much of a difference, but I estimate that removing 10 additional JS files, for instance, will shave 500-1000ms off latency. Plus, if you include the JS in the head, the page stays blank for the whole time it spends loading.

We'll need to be able to easily toggle between fully commented code and minified code, both for code hosted by us and code from the Google AJAX Libraries API. Since Google offers both minified and normal versions of the code, this should be no problem.

Including JavaScript in Ruby on Rails

First of all, let's look at how JavaScript is included in Ruby on Rails, the javascript_include_tag method. The method quite simply takes any number of source URLs and returns an HTML script tag for each of the sources provided. For instance:

  javascript_include_tag("script1.js", "script2.js", "script3.js")

Result:

<script type="text/javascript" src="/javascripts/script1.js"></script>
<script type="text/javascript" src="/javascripts/script2.js"></script>
<script type="text/javascript" src="/javascripts/script3.js"></script>

There's a little bit of magic you can do which is to include all JavaScript files in the /public/javascripts folder automatically, as well as having them concatenated into a single file. It looks like this:

  javascript_include_tag :all, :cache => true # when ActionController::Base.perform_caching is true

The resulting code is:

<script type="text/javascript" src="/javascripts/all.js"></script>

If you don't want them concatenated into a single file, e.g. for development purposes, you could do something like this:

# config/environment.rb:
DEBUG_JS = false

# config/environments/development.rb:
DEBUG_JS = true

# app/views/layouts/site.html.erb or wherever you include the JavaScript in your site:
<%= if DEBUG_JS && ENV["RAILS_ENV"] == "development"
      javascript_include_tag(:all)
    else
      javascript_include_tag(:all, :cache => true)
    end %>

I'm not sure how to integrate JSMin, YUICompressor or any other minification into the cached file, though, so I will explore the options for manual building and inclusion next.

Something else to remember is that caching needs ActionController::Base.perform_caching to be true in order to work. This is the default in production, but not development, though of course you can override it in your environments/development.rb file. For more information, view the docs for the javascript_include_tag.
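
If you do want the cached, concatenated file while still running in development, a minimal sketch (assuming the standard Rails 2.x configuration files) is to flip that flag in the environment file:

# config/environments/development.rb
config.action_controller.perform_caching = true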

JavaScript Concatenation and Minification

Next, we'll write a custom build script to generate a concatenated, minified version of our files. Ideally, Rails would automatically run it through a minifier, but I'm not sure how to set that up. In the meantime, this is a working solution that is pretty straightforward and requires minimal effort. I wish I were more skilled at shell scripting (and I highly urge readers to contribute their own code), but I'll show my implementation as an example regardless.

I'm on a Windows machine and wrote it as a batch file (.bat), but you can do the same thing on Linux in a shell script -- the append redirection (the two right-angle brackets, >>) works the same way there. Here goes:

java -jar yuicompressor-2.3.5.jar "script1.js" -o main.js
java -jar yuicompressor-2.3.5.jar "script2.js" >> main.js
java -jar yuicompressor-2.3.5.jar "script3.js" >> main.js
java -jar yuicompressor-2.3.5.jar "script4.js" >> main.js

Nothing too sophisticated, just a straightforward script that generates main.js using the YUICompressor. To use it, simply run the batch file in the same folder as the JavaScripts and yuicompressor-2.3.5.jar. It will output a main.js file. I chose to run it separately for each individual file so if there are any errors, you can see where they occurred. If you concatenate all of the files into one and then compress it, debugging is difficult as the line number which threw the error is unknown.

I chose to use YUICompressor over JSMin since it seems to have better error messaging, warnings, and be a bit more strict. One of my former colleagues did some tests that determined that JSMin was faster in terms of using less CPU than YUICompressor (and certainly for minifiers which use eval(), such as Dean Edwards' packer). Unfortunately, I can't remember the specifics of the test or which platforms it was on, so that data is pretty much worthless. In any case, I decided on YUICompressor but you can use JSMin or whichever minifier you prefer.

If you are going to use YUICompressor, you must have Java installed and included in your path. If it isn't in your path, you can use the full path to Java, for instance, "C:\Program Files\Java\bin\java.exe" -jar [filename] [options].

Using the Google AJAX Libraries

Depending on your opinion about Google's recent offer to host the world's JavaScript frameworks with its AJAX Libraries API, you may not find this suggestion too useful, but I think it's helpful to present here, especially for those without access to their hosting provider's response header settings. Without going into too much detail, suffice it to say that besides the caching benefit of using Google's API, their servers are configured "correctly" for best performance (far-future expires headers, Gzip, etc).

One thing, though, is that you will want to include a minified version of the JavaScript in production and a full version in development, to ease debugging. It's impossible to debug minified JavaScript (don't even try), so we have to set a toggle like in the above example. Say your site is including jQuery. You may want to include a minified version on the live server but read the full code locally.

Toggling Minified Google JS

Here's how you can include minified jQuery in the production Ruby on Rails code while including the full jQuery library, comments and all, in development:

<%= if DEBUG_JS && ENV["RAILS_ENV"] == "development"
      javascript_include_tag("http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.js", :all)
    else
      javascript_include_tag("http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js", :all, :cache => true)
    end %>

Or, if you didn't want to include everything in the JS folder, and wanted to granularly include only specific files, you might rewrite it like so:

<%= if DEBUG_JS == false || ENV["RAILS_ENV"] != "development"
      javascript_include_tag("http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js",
                             "main.js")
    elsif DEBUG_JS == true && ENV["RAILS_ENV"] == "development"
      javascript_include_tag("http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.js",
                             "script1.js",
                             "script2.js",
                             "script3.js",
                             "script4.js")
    end %>

That's all there is to it! Just switch the DEBUG_JS flag when you want to test in development mode with or without compression, and don't worry about production, since it will always serve up the minified, concatenated JavaScript. The logic ensures that you won't accidentally start serving up the separate files in production, while giving you the flexibility to store your JavaScript in as many files as you like without negative consequences on the front end, as well as reading fully-commented code in development.


Friday, June 6, 2008

Optimizing JavaScript in Ruby on Rails

posted by Jonah Dempcy

I'm learning Ruby on Rails for a current client of mine and wanted to share this tip on how to optimize JavaScript performance in Ruby on Rails. This will only focus on one aspect of JS performance optimization, namely, writing a build script to concatenate/minify the JS and setting up Rails to easily toggle between the compressed and normal files.

The reason to do this is that it's faster for the client to download one file than multiple files. In other words, ten 1-kilobyte files are much slower to download than a single 10k file. You might not think it makes much of a difference, but I estimate that removing 10 additional JS files, for instance, will shave 500-1000ms off latency. Plus, if you include the JS in the head, the page stays blank for the whole time it spends loading.

Monday, June 2, 2008

XAMPP All-in-one Web Development Stack

posted by Jonah Dempcy

Setting up web development environments is time consuming, and any seasoned developer has most likely spent countless hours debugging obscure configuration issues. Also, with the plethora of options available, newer developers can have a hard time finding and choosing what to install.

It turns out that most dynamic websites use a fairly common setup known as LAMP, short for Linux, Apache, MySQL and PHP/Perl.

With this in mind, the developers at Apache Friends have released XAMPP, the X being an operating system of your choice. It comes with Apache, MySQL, PHP and Perl (along with a grab-bag of other goodies), and you can choose the package built for your OS.

Although I'm a sometime Ubuntu user, my current development environment is primarily on a Vista laptop. I grabbed the Windows XAMPP package and was amazed at how easy the "no-config" install was.

It includes Apache, MySQL, PHP and Perl, plus phpMyAdmin, the FileZilla FTP server, the Mercury mail server and more.

Whew! They threw in everything and the kitchen sink. But, luckily the installation is painless and you don't have to worry about the stuff you might not use, plus it's nice to know it's there if you need it. phpMyAdmin makes managing the database a breeze and above all, everything just works.

After downloading and installing the package to C:\xampp (they recommend against placing it in Program Files on Vista due to file permissions issues), I took it for a spin by visiting http://localhost in my browser. The user is presented with a handy control panel, with links pointing to all of the various parts of the XAMPP bundle.

Note: I chose to run Apache and MySQL as services, meaning they will always start when the computer boots up, but you may choose to run them as standalone programs if you don't use them much. If you didn't choose to install it as a service and are not getting a response from http://localhost, make sure you're running Apache and MySQL. You can control which parts of XAMPP are running under Start / Programs / XAMPP. This is where you can start/stop servers as well as toggle them as services.


The XAMPP control panel for starting and stopping Apache, MySQL, FileZilla & Mercury

Assuming everything is working correctly, visiting http://localhost should bring up the XAMPP welcome screen.

If you need more options, such as Java/JSP support or Python, check out the add-ons available:

  • Perl Addon with Mod_Perl and a selection of important Perl modules
  • Tomcat Addon (Requirement: SUN J2SE SDK must already be installed)
  • Cocoon for Tomcat Addon (Requirement: Tomcat Addon must already be installed)
  • Python Addon

A final note: This is for development purposes only, so it has absolutely no security. Don't even think about using this set up in a live environment. You've been warned. Don't believe me? Here's the list of security holes:

Here is the list of missing security precautions in XAMPP:

* The MySQL administrator (root) has no password.
* The MySQL daemon is accessible via network.
* PhpMyAdmin is accessible via network.
* Examples are accessible via network.
* The default users of Mercury and FileZilla are known.

Even in development, it's a good idea to set passwords and disable unused services. You can do so by changing the settings at http://localhost/security in a web browser.

For more info on XAMPP, visit the FAQ on the site:

http://www.apachefriends.org/en/faq-xampp-windows.html

Or the forum:

http://www.apachefriends.org/f/

Happy XAMPPin'!
