JavaScript News

How to Deal With Slow Times at Work

Mon, 2015-07-06 23:28

In high school, I ran my father’s drop store from 4 p.m. to 7 p.m. during the weekdays. I loved it because I got paid $12 per hour (which was a lot for a kid who had no bills to pay) to do my homework, since this was the slowest time of the day. Since before […]

Continue reading: How to Deal With Slow Times at Work

Creating Scroll-based Animations using jQuery and CSS3

Mon, 2015-07-06 20:00

Creating movement is a great way to provide an interesting and interactive experience for your viewers. With modern sites providing a great deal of interactivity, it’s becoming increasingly expected that even simple websites will offer some level of animation / movement to engage their visitors.

Today I will be outlining a technique that you can adapt to your web projects - triggering animations when scrolling into a pre-defined region. These animations will be created using CSS transforms and CSS transitions. We will also use jQuery to detect when the elements are visible and to add/remove the appropriate classes.

For those who want to see examples of this in action, you can jump straight to the demos.

Why Trigger Animations on Scroll?

The main reason we would want to trigger animations on scroll is that they activate just as the user scrolls an element into view.

We might want to fade elements in, or provide an interesting transformation, and these effects only make sense when the user can actually view them.

Animating with CSS or with jQuery?

There are pros and cons to each approach. jQuery (read JavaScript) allows you to animate things that CSS doesn’t (such as the scroll position, or an element’s attributes), whilst CSS animations can be very attractive for developers who prefer putting all of their animation and presentation logic in the CSS layer.

I will be using transformations via CSS; however, there are always variables to consider depending on your situation. I would take the following factors into account:

Browser Compatibility

Since our solution will be based on transformations, our browser compatibility will be limited to those that support either 2D transformations or 3D transformations.

All modern browsers support 3D transforms, and several older legacy browsers such as Internet Explorer 9 and Opera 11.5 support 2D transforms. Overall support for both desktop and mobile browsers is comprehensive.

jQuery’s animate method works in any (sane) browser, provided you are using the 1.X version of the library. jQuery 2.X removed support for IE8 and below, so only use this if you don’t need to support legacy browsers (lucky you!).

Speed

We want fast and smooth animations, especially when it comes to mobile devices. As such, it's always best to use transitions and transformations where possible.

The examples will use 3D transforms with 2D fall-backs for older browsers. We want to force hardware acceleration for speed, so a 3D transformation is a must (we will be using translate3d along with other functions that cause GPU accelerated rendering).
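
As a hedged sketch of what such classes might look like (the class names .animation-element and .in-view are assumptions, not taken from the demo code), a GPU-accelerated slide-and-fade could be written as:

```css
/* Hidden starting state: the translate3d offset triggers GPU compositing */
.animation-element {
  opacity: 0;
  transform: translate3d(0, 100px, 0);
  transition: opacity 0.5s ease-out, transform 0.5s ease-out;
}

/* Added by the jQuery scroll handler once the element enters the viewport */
.animation-element.in-view {
  opacity: 1;
  transform: translate3d(0, 0, 0);
}
```

In production you would also include the vendor-prefixed `-webkit-transform` / `-webkit-transition` properties for the older browsers mentioned above.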

jQuery’s animate method is considerably slower than a GPU assisted transformation, so we will just be using jQuery for our event handling / calculations, not for our animation itself (as we want them to be as smooth as possible).

Side Note

We all know that jQuery !== JavaScript, right? Well, it turns out that using vanilla JS for animations might not be such a bad idea after all. Whilst that is beyond the scope of this tutorial, there are excellent articles on the subject for those who are interested in finding out more.

Now back to the show …

Detecting Animation Elements in View

The overall point of this technique is to loop through all of the elements we marked as animatable and then determine whether they are currently within the viewport. Let’s step through how we will achieve this:

Selector Caching

Scrolling is an expensive business. If you attach an event listener to the scroll event, it will fire many times over whenever a user scrolls the page. As we will be calling our dimension / calculation functions whenever a user scrolls, it is a good idea to store the elements returned by our selectors in variables. This is known as selector caching and avoids us querying the DOM over and over again.

In our script we will be referencing both the window object and the collection of elements we want to animate.

[code language="js"]
//Cache reference to window and animation items
var $animation_elements = $('.animation-element');
var $window = $(window);
[/code]

Notice the dollar sign in front of the variables. This is a convention to indicate that they hold a jQuery object, or collection of objects.

Hooking into the Scroll Event

Next, we create our event handler that listens for the scroll event. This will fire when we scroll the page. We pass it a reference to our check_if_in_view function (which we’ll get to in a minute). Every time the scroll event is fired, this function will be executed.

[code language="js"]
$window.on('scroll', check_if_in_view);
[/code]
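
The check_if_in_view function itself falls after this excerpt's cut-off. As a hedged sketch based only on the description above (the in-view class name and the exact overlap test are assumptions), the viewport check reduces to a pure comparison that the handler can apply to each cached element:

```javascript
// Pure viewport test: does an element spanning [elementTop, elementBottom]
// overlap the visible region [scrollTop, scrollTop + viewportHeight]?
function isInView(elementTop, elementBottom, scrollTop, viewportHeight) {
  var viewportBottom = scrollTop + viewportHeight;
  return elementBottom >= scrollTop && elementTop <= viewportBottom;
}

// Inside check_if_in_view, the jQuery wiring would look roughly like:
//   $animation_elements.each(function () {
//     var $el = $(this), top = $el.offset().top;
//     var visible = isInView(top, top + $el.outerHeight(),
//                            $window.scrollTop(), $window.height());
//     $el.toggleClass('in-view', visible);
//   });
```

The pure function keeps the geometry testable in isolation, while the jQuery layer only gathers measurements and toggles classes.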

Continue reading: Creating Scroll-based Animations using jQuery and CSS3

Augmented Reality in the Browser with Awe.js

Mon, 2015-07-06 18:00

Augmented reality is a neat concept. We take a view of the world around us and supplement it with images, text, sound and video. Tech companies are starting to explore the possibilities of AR with devices such as the Meta Glasses, Microsoft HoloLens and Magic Leap. These very exciting AR headsets are not quite ready for consumer release yet, so it may be a little while before every household has a pair. However, there is another way of introducing the world to augmented reality using something they may have easier access to - the mobile browser.

I have previously covered other sorts of reality creation and manipulation using JavaScript and Three.js here at SitePoint in my articles on Bringing VR to the Web with Google Cardboard and Three.js and Filtering Reality with JavaScript and Google Cardboard. In this article, I'll show how you can use a JavaScript library called awe.js to create an augmented reality experience on the mobile web. We're going to create a 3D control board that opens on top of a paper marker. We can hook it up to do pretty much anything that can be enabled via a JavaScript HTTP request, so I've set it up to change the color of my LIFX lightbulb using IFTTT.

What You'll Need

For this demo, you'll currently need Google Chrome for Mobile. It may potentially work on Firefox for Mobile too; however, I found click events didn't trigger for me when I tried it on my HTC One M9. It also works on some desktop browsers (Chrome and Opera worked quite nicely on my Mac) but definitely isn't quite the same experience as a smartphone with touch events. It could be neat on a tablet though.

You'll also need an IFTTT account and a knowledge of how to set up the Maker Channel with rules that trigger on HTTP requests. If you're new to IFTTT, we previously went over the basics in the article on Connecting LIFX Light Bulbs to the IoT Using IFTTT. For those new to the Maker channel, we've also covered that in Connecting the IoT and Node.js to IFTTT.

Lastly, you'll need to print out a marker onto a piece of paper. The marker we'll be using is this one:

The Code

If you'd prefer to get straight into the code and try it out, it is all available on GitHub.

Awe.js

Awe.js is a JavaScript library that uses Three.js, your device's camera and some pretty smart techniques to create augmented reality in the browser. You can download the library and some samples on the awe.js GitHub repo. It provides four different sorts of AR experiences, each with their own example in the repo:

  • geo_ar - Allows you to place objects at set compass points.
  • grift_ar - Compatible with an Oculus Rift.
  • leap_ar - Integrates with the Leap Motion controller.
  • marker_ar - Allows you to create an experience that is positioned on Augmented Reality markers. This is the one we'll be working with in this demo.

Our Augmented Reality Demo Code

Our demo code is over 300 lines long, but a lot of it is repeated code for similar objects. I'd recommend downloading the demo code from the demo's GitHub repo and following along with the explanations provided here. Once you've got an idea of how it all works, try tinkering away and building something of your own.

Everything starts within the load event on our window. The very first thing we include is a variable to track whether our AR control panel (I've called it a "menu" for short here) is open or not. Initially, it is closed.

[code language="js"]
window.addEventListener('load', function() {
  var menu_open = false;

  // Our code continues here
});
[/code]

Then, we start to use the awe.js library. Everything we do is defined within the window.awe.init() function. We start with some global settings for our AR scene.

[code language="js"]
window.awe.init({
  device_type: awe.AUTO_DETECT_DEVICE_TYPE,
  settings: {
    container_id: 'container',
    fps: 30,
    default_camera_position: { x: 0, y: 0, z: 0 },
    default_lights: [{
      id: 'point_light',
      type: 'point',
      color: 0xFFFFFF
    }]
  },
[/code]

  • device_type - All of the examples set this to awe.AUTO_DETECT_DEVICE_TYPE which requests it to detect the device automatically. So far I haven't seen a need to change this.
  • settings - Settings we may actually want to change live within here. These include:
    • container_id - The ID of the element our whole experience is going to be generated inside.
    • fps - Our desired frames per second (optional).
    • default_camera_position - The default camera position that we will be viewing our scene from (we're starting it at (0,0,0)).
    • default_lights - We can set up an array of different Three.js lights for our scene, giving each an ID, defining the type of light it is and its color. Our demo has only one white Three.js PointLight. There are a range of options available for the type of light, which correspond to different types of Three.js lights - 'area', 'directional', 'hemisphere', 'point' and 'spot'.

Once our settings are in place, we then define what to do when awe.js has initialised. Everything is wrapped within an awe.util.require() function which defines what browser capabilities it requires before loading the additional JavaScript files we'll need. Be careful to define only the browser capabilities you actually need for the demo, as you can unnecessarily prevent your AR app from working in some browsers if you define these incorrectly using capabilities listed in some of their other GitHub examples. For example, in order to have elements positioned based upon compass points you need access to the 'gyro' capability. That won't work on most desktop browsers. We don't need that in this demo, so we exclude it.

[code language="js"]
ready: function() {
  awe.util.require([
    {
      capabilities: ['gum', 'webgl'],
[/code]

The files that are defined pull in specific functionality for awe.js. lib/awe-standard-dependencies.js, lib/awe-standard.js and lib/awe-standard-window_resized.js are pretty common, defining the standard bits and pieces for awe.js and handling window resizing. Our demo uses markers, which requires the other two files listed below those.

[code language="js"]
files: [
  ['lib/awe-standard-dependencies.js', 'lib/awe-standard.js'],
  'lib/awe-standard-window_resized.js',
  'lib/awe-standard-object_clicked.js',
  'lib/awe-jsartoolkit-dependencies.js',
  'lib/awe.marker_ar.js'
],
[/code]

Once we've got all of those files successfully loaded, we run the aptly named success() awe.js function. The first function you'll always run when you're ready to start displaying elements sets up the awe.js scene.

[code language="js"]
success: function() {
  window.awe.setup_scene();
[/code]

All elements in awe.js are positioned within "Points of Interest" (POIs). These are specific points in the scene, marked via coordinates, that objects can be positioned inside of. You can move POIs around within awe.js, as well as the elements themselves. We create a single POI which will be placed wherever a specific paper marker is seen. To create a POI, we use the awe.js function awe.pois.add().

I've given it an ID of 'marker' but you could call it anything you'd like, as long as you are consistent throughout other references to this POI in the code. We set its initial position to be (0,0,10000), which positions it off into the distance a bit until we're ready to use it. We also set it to be invisible until we spot the marker.

[code language="js"]
awe.pois.add({id: 'marker', position: {x: 0, y: 0, z: 10000}, visible: false});
[/code]

Elements we add into our POIs are called "projections" within awe.js. The first projection we add into our scene I've called 'wormhole', as this is a flat black square where our menu items will magically appear out of. As with the ID of the POI, you could name yours absolutely anything, as long as you keep it consistent with other references to it in your code. We add it into our POI using the function awe.projections.add().

[code language="js"]
awe.projections.add({
  id: 'wormhole',
  geometry: {shape: 'plane', height: 400, width: 400},
  position: {x: 0, y: 0, z: 0},
  rotation: {x: 90, z: 45},
  material: {
    type: 'phong',
    color: 0x000000
  }
}, {poi_id: 'marker'});
[/code]

There are quite a few options for the objects we can add as projections, so I'll explain them in more detail. Take note - all x, y and z values here for positioning and rotating are in relation to its POI. That POI is defined at the very end by its ID as {poi_id: 'marker'}.

  • geometry - This refers to the projection's Three.js geometry options. The options required for each type of geometry match those provided in awe.js. For example, SphereGeometry in Three.js would be represented as {shape: 'sphere', radius: 10} in awe.js. One thing to note for those using the latest Three.js, in the currently available version of awe.js, BoxGeometry is still using CubeGeometry. So, to create boxes, we use the format {shape: 'cube', x: 20, y: 30, z: 5} (despite the name, it does not need to be a "cube").
  • position - You can adjust the item's x, y and z axis in relation to its POI.
  • rotation - You can rotate the item by its x, y and z axis in relation to its POI. I rotate the wormhole 90 degrees on its x axis so that it sits flat on the table and 45 degrees by its z axis as I thought that looked more natural (it doesn't quite line up exactly with the marker at all times, so having it on a diagonal makes this less obvious).
  • material - This defines the projection's Three.js material. I've stuck to using 'phong' (MeshPhongMaterial in Three.js), however it looks like 'lambert', 'shader', 'sprite' and 'sprite_canvas' are also potentially available as options. We can also define its color in hex.
  • texture - This is not used in the demo but I wanted to include it in this article for completeness. To define a texture, you can include texture: {path: 'yourtexturefilename.png'}.
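
As a hedged illustration of these options assembled together (the ID, position, and color below are made-up values for this sketch, not taken from the demo), a sphere projection following the geometry mapping described above might look like:

```javascript
// Hypothetical projection options: SphereGeometry {radius: 10} in Three.js
// becomes {shape: 'sphere', radius: 10} in awe.js.
var sphere_projection = {
  id: 'indicator_sphere',                 // any ID works if used consistently
  geometry: {shape: 'sphere', radius: 10},
  position: {x: 0, y: 50, z: 0},          // 50 units above its POI
  rotation: {x: 0, y: 0, z: 0},
  material: {type: 'phong', color: 0xFF0000},
  texture: {path: 'yourtexturefilename.png'}
};

// Attached to the POI exactly as the wormhole was:
// awe.projections.add(sphere_projection, {poi_id: 'marker'});
```

The second argument ties the projection to the 'marker' POI, so all of the coordinates above are relative to wherever the paper marker is detected.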

In the demo, I add seven different boxes/cubes to the scene; each one is 30 pixels high and placed 31 pixels lower on the y axis so that it is initially hidden by the wormhole. They're all slightly different widths to make them look a bit like a lightbulb.

Continue reading: Augmented Reality in the Browser with Awe.js

Building a VPS with WordPress on a LEMP Stack

Mon, 2015-07-06 17:00

With site performance a key metric in Google’s ranking algorithms, more WordPress users are turning to dedicated, managed WordPress hosting such as WP Engine, Media Temple or SiteGround.

However, these can be expensive solutions for some, with costs starting at ~$30/site per month.

If you’re comfortable with basic server administration and WordPress, it’s possible to set up your own inexpensive hosting for small WordPress websites that matches the performance of managed WordPress hosting providers, using a LEMP stack (Linux, nginx, MySQL, PHP).

Creating a DigitalOcean Virtual Private Server (VPS)

This article uses DigitalOcean, but you can equally use services such as Linode, Vultr or a number of other providers. The other sections of this guide will apply equally, regardless of which provider you're using.

Continue reading: Building a VPS with WordPress on a LEMP Stack

How to Build Multi-step Forms in Drupal 8

Mon, 2015-07-06 16:00

In this article, we are going to look at building a multistep form in Drupal 8. For brevity, the form will have only two steps in the shape of two completely separate forms. To persist values across these steps, we will use functionality provided by Drupal’s core for storing temporary and private data across multiple […]

Continue reading: How to Build Multi-step Forms in Drupal 8

Rapid Prototyping Compositions with Adobe Comp CC (iPad app)

Mon, 2015-07-06 15:00

Everything is moving towards speed and mobility. Many people, including designers, are at least augmenting their desktop and laptop computers with mobile devices such as tablets. With this in mind, more companies are focusing on apps you can use to increase your productivity.

Imagine sitting on the subway on the way back to your office and creating a mockup of a website along the way.

Imagine having an idea for a print poster, and getting the layout put together while you're in transit. When you get back, you can bring your idea into your more powerful devices for better design & development. In comes Adobe Comp CC, an app focused on quick prototyping on your iPad. Now, you can jump right into your ideas and start fleshing them out.

Premade Layouts

The great thing about Adobe Comp CC is that it's free. Search for it on the App Store, and download it without paying a penny. Then, install it and fire it up to get started immediately. When you start the app, you have three options to choose from. You can select Mobile, Print, or Web, and each has pre-made layouts set up for you.

Gestures

You can draw your layouts with simple gestures. Everything is simplified, to make it quick and easy to draw even complex shapes and useful design objects. For example, you can draw a square with a circle in one of the corners to create a rounded rectangle. A circle or a rectangle with an x over it creates an image box or a circular image box. You can see the list of gestures, shown in the screenshot above.

Options

The gear icon in the top right corner of the app gives you the option to choose from different document sizes. You don't have to simply remember common document sizes. Adobe has put together a lot of common options for you. You can start with a variety of iPhones, iPads, letters, and web sizes. It even lists the pt and pixel sizes below each option.

Editing Premade Layouts

Moving around the canvas is as simple as swiping, pinching to zoom out, and spreading your fingers apart to zoom in, just like any other app. It's even more important in an app like this, because of the small refinements you'll need to make. You also may have small elements in your design that you need to edit. You can tap an element to edit it, and hitting the + symbol below a premade layout allows you to make a copy. I'd make a copy before making any edits.

Once you're into the layout, you have several different options. You can select elements simply by tapping. You can also open up more options, such as shapes, text, and photos. These menus open up even more options to work with.

Shapes

You can choose from a collection of basic shapes such as circles, rectangles, and horizontal & vertical lines. You can also bring in any custom shapes libraries you have stored in the Creative Cloud.

Text

Just like a typical design program, you can have presets for type. You can control whether certain text is set as a headline, sub-headline, or paragraph text.

Continue reading: Rapid Prototyping Compositions with Adobe Comp CC (iPad app)

WebSockets in the Ruby Ecosystem

Mon, 2015-07-06 14:00

Ruby is learning to love WebSockets

What the heck is a "WebSocket", exactly? Some of us have heard about the changes that are coming to Rails with regard to WebSockets (e.g. Action Cable in Rails 5) but it's a bit difficult to pinpoint exactly what WebSockets are meant to do. What problem do they solve and how do they fit into the realm of HTTP, the web, and Ruby? That's what we'll cover in this article.

Why?

Let's dial the time machine to the beginning of the web. Way back in the day, as we all know, websites consisted of static pages with a bunch of links between them. These pages are called "static" because nothing about them really changes on a per-user sort of basis. The server just serves up the same thing to every single user based on the path that the user requests. We quickly realized that this sort of thing was all well and good if all we wanted the web to be was the equivalent of an easily available book, but we could actually do a lot more. So, with input elements and CGI (Common Gateway Interface - a way for external scripts to talk to the web server), dynamic elements crept into web pages.

Now, we could actually process input data and do something with it. As websites got busier, we realized that CGI was pretty terrible at scaling. Along came a slew of options such as FastCGI to remedy this problem. We came up with all sorts of frameworks to make writing back-ends a lot easier: Rails, Django, etc. All this progress happened, but at the end of the day, we were still serving up some HTML (through a variety of methods), the user was reading this mostly static HTML and then requesting some different HTML.

Then, developers realized the power of JavaScript and communication with the server through AJAX. No longer were pages just blobs of HTML. Instead, JavaScript was used to alter the content of these pages based on asynchronous communication with the server. Often, state changes that occurred on the server had to be reflected on the client. Taking a very simple example, maybe we want a notification to show up on the admin panel when the number of users on the server exceeds a certain limit. However, the methods used to do this sort of thing weren't the best. One common solution was HTTP long polling. With it, the client (i.e. JavaScript code running in the browser) sends the HTTP server a request which the server keeps open (i.e. the server doesn't send any data but doesn't close the connection) until some kind of update is available.

You might be wondering: Why do we do this waiting stuff if the client could somehow just ask the server to tell the client when an update comes along? Well, unfortunately, HTTP doesn't really let us do that. HTTP wasn't designed to be a server-driven protocol. The client sends the requests, the HTTP server answers them. Period. Long polling isn't really a great solution since it causes all sorts of headaches when it comes to scaling, users switching between Wi-Fi and cellular, etc. How do we solve this problem of letting the server talk to the client?
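
Though the full answer lies past this excerpt, the shape of the solution is visible from the browser side. As a hedged sketch (the URL and message strings are placeholders, not from the article), a WebSocket connection lets either side talk whenever it wants:

```javascript
// Browser-side WebSocket sketch; the URL here is a placeholder assumption.
var socket = new WebSocket('ws://localhost:3000/updates');

socket.onopen = function () {
  socket.send('hello from the client');   // the client can push at any time...
};

socket.onmessage = function (event) {
  // ...and, unlike plain HTTP, the server can push at any time too
  console.log('server says: ' + event.data);
};
```

The key contrast with long polling is that the connection stays open in both directions, so no request needs to be held hostage waiting for an update.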

Continue reading: WebSockets in the Ruby Ecosystem

On Our Radar: Time, Responsive Design and Misplaced Commas

Sat, 2015-07-04 17:00

If you’re happy and you know it, syntax error! In a week of misplaced semicolons and forgotten commas, we had a lot to talk about.

On Our Radar

mikey_w, regularly featured in this series, is excited to be learning more about responsive web design (and adaptive design, but that’s another matter entirely). They want to […]

Continue reading: On Our Radar: Time, Responsive Design and Misplaced Commas

A Beginner’s Guide to Handlebars

Fri, 2015-07-03 18:00

Nowadays the majority of the Web consists of dynamic applications in which the data keep changing frequently. As a result, there is a continuous need to update the data rendered on the browser. This is where JavaScript templating engines come to the rescue and become so useful. They simplify the process of manually updating the view and at the same time they improve the structure of the application by allowing developers to separate the business logic from the rest of the code. Some of the most well-known JavaScript templating engines are Mustache, Underscore, EJS, and Handlebars. In this article we’ll focus our attention on Handlebars by discussing its main features.

Handlebars: What it is and Why to Use it

Handlebars is a logic-less templating engine that dynamically generates your HTML page. It’s an extension of Mustache with a few additional features. Mustache is fully logic-less but Handlebars adds minimal logic thanks to the use of some helpers (such as if, with, unless, each and more) that we’ll discuss further in this article. As a matter of fact, we can say that Handlebars is a superset of Mustache.

Handlebars can be loaded into the browser just like any other JavaScript file:
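
The excerpt cuts off before the snippet itself; as a hedged sketch (the file path and template ID below are assumptions), loading and using Handlebars in the browser might look like:

```html
<!-- Load the library like any other script; the path is an assumption -->
<script src="handlebars.min.js"></script>

<!-- A template is regular HTML with {{placeholders}}, kept inert
     by the non-JavaScript script type -->
<script id="greeting-template" type="text/x-handlebars-template">
  <p>Hello, {{name}}!</p>
</script>

<script>
  var source   = document.getElementById('greeting-template').innerHTML;
  var template = Handlebars.compile(source);   // compile once...
  document.body.innerHTML += template({ name: 'World' }); // ...render per data
</script>
```

Compiling once and rendering many times with different data objects is the usual pattern.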

Continue reading: A Beginner’s Guide to Handlebars

Browser Trends July 2015: Stalled Safari?

Fri, 2015-07-03 17:00

In last month's browser chart, Chrome was inching toward the 50% milestone. What do June's StatCounter statistics reveal?…

Worldwide Desktop & Tablet Browser Statistics, May to June 2015

The following table shows browser usage movements during the past month.

Browser       May      June     Change   Relative
IE (all)      18.28%   18.49%   +0.21%   +1.10%
IE11          10.83%   11.33%   +0.50%   +4.60%
IE10           1.87%    1.83%   -0.04%   -2.10%
IE9            2.18%    2.20%   +0.02%   +0.90%
IE6/7/8        3.40%    3.13%   -0.27%   -7.90%
Chrome        49.36%   49.77%   +0.41%   +0.80%
Firefox       16.39%   16.09%   -0.30%   -1.80%
Safari         5.76%    5.41%   -0.35%   -6.10%
iPad Safari    5.06%    5.14%   +0.08%   +1.60%
Opera          1.62%    1.62%   +0.00%   +0.00%
Others         3.53%    3.48%   -0.05%   -1.40%

Continue reading: Browser Trends July 2015: Stalled Safari?

Turning a Crawled Website into a Search Engine with PHP

Fri, 2015-07-03 16:00

In the previous part of this tutorial, we used Diffbot to set up a crawljob which would eventually harvest SitePoint’s content into a data collection, fully searchable by Diffbot’s Search API. We also demonstrated those searching capabilities by applying some common filters and listing the results.

In this part, we’ll build a GUI simple enough for the average Joe to use it, in order to have a relatively pretty, functional, and lightweight but detailed SitePoint search engine. What’s more, we won’t be using a framework, but a mere total of three libraries to build the entire application.

You can see the demo application here.

This tutorial is completely standalone, and as such if you choose to follow along, you can start with a fresh Homestead Improved instance. Note that in order to actually fully use what we build, you need a Diffbot account with Crawljob and Search API functionality.

Bootstrapping

Moving on, I’ll assume you’re using a Vagrant machine. If not, find out why you should, then come back.

On a fresh Homestead Improved VM, the bootstrapping procedure is as follows:

composer global require beelab/bowerphp:dev-master
mkdir sp_search
cd sp_search
mkdir public cache template template/twig app
composer require swader/diffbot-php-client
composer require twig/twig
composer require symfony/var-dumper --dev

In order, this:

  • installs BowerPHP globally, so we can use it on the entire VM.
  • creates the project’s root folder and several subfolders.
  • installs the Diffbot PHP client, which we’ll use to make all calls to the API and to iterate through the results.
  • installs the Twig templating engine, so we’re not echoing out HTML in PHP like peasants :)
  • installs VarDumper in dev mode, so we can easily debug while developing.

Continue reading: Turning a Crawled Website into a Search Engine with PHP

Confessions of a SitePoint Editor

Fri, 2015-07-03 14:45

I’ve been writing since before it was cool. I discovered my passion at the ripe age of 20. It started in 2010, when I admitted I hated my college major. I applied for a spot in University of Central Florida’s (UCF) journalism program. Shortly after, I was accepted. And shortly after that, I failed my […]

Continue reading: Confessions of a SitePoint Editor

Implementing Lazy Enumerables in Ruby

Fri, 2015-07-03 14:23

I've always been fascinated by Ruby's lazy enumeration, a feature that was introduced in Ruby 2.0. In this article, we take a deep dive into lazy enumeration, learning how Ruby implements this interesting programming technique by making our own lazy enumerable.

What exactly is lazy? Is Ruby trying to slack off? Being lazy refers to the style of evaluation. To let this sink in, consider the opposite of lazy evaluation: eager evaluation.

There really isn't much to talk about regarding eager evaluation, as it's the standard way Ruby works. But sometimes, being eager is a bad thing. For instance, what do you think this evaluates to:

[ruby]
irb> 1.upto(Float::INFINITY).map { |x| x * x }.take(10)
[/ruby]

You might expect the result to be [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]. Unfortunately, the Ruby interpreter doesn't know when to stop. The problem here is 1.upto(Float::INFINITY), which represents an infinite sequence. What does it look like in the console?

[ruby]
irb> 1.upto(Float::INFINITY)
=> #<Enumerator: 1:upto(Infinity)>
[/ruby]

No surprise here, an enumerator is returned. Let's try to force values out from the enumerator using Enumerator#to_a:

[ruby]
irb> 1.upto(5).to_a
=> [1, 2, 3, 4, 5]
[/ruby]

To infinity and beyond!:

[ruby]
irb> 1.upto(Float::INFINITY).to_a
[/ruby]

You shouldn't be surprised by now that this will lead to an infinite loop. Notice that Enumerator#to_a is useful to "force" values out of an enumerator. In fact, Enumerator#to_a is aliased to Enumerator#force. This is useful, as you will see later on, when you want to know all the values produced by an enumerator.
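
The excerpt ends before the lazy solution itself appears, but for orientation, Ruby's built-in Enumerator::Lazy makes the original infinite example terminate:

```ruby
# The lazy counterpart of the eager example: .lazy defers each computation,
# so first(10) pulls exactly ten values from the infinite sequence and stops.
squares = 1.upto(Float::INFINITY).lazy.map { |x| x * x }.first(10)
p squares
# => [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
```

The rest of the article builds a hand-rolled version of this behaviour to show how it works under the hood.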

Continue reading: Implementing Lazy Enumerables in Ruby

Darwin and the Art of Web Design

Fri, 2015-07-03 14:00

Once upon a time there was a man called Darwin. This man came up with an idea that has come to be known as ‘the survival of the fittest’.

This is probably one of the most misused terms in history, but the concept is based on his theory of natural selection, and it is natural selection that I believe can be applied to web design to create better and more usable websites and web applications.

Natural selection is the idea that something (in Darwin's case, some population of animals or plants) will change, as environmental conditions change.

Change in this context means they are somehow better adapted to that new environment, which then means that they can reproduce more and so propagate their traits (or features and elements in the case of web design). This then ultimately results in that changed version of the species becoming successful.

I believe this fundamental concept of adaptation to changing environments making more successful, better-adapted species can also be applied to web design.

This is not an article about how to design your website and which design elements to use, it is more about an approach to web design, based as much on science as art.

Applying Darwinism to Web Design

Constraints and Trade-offs

Just like in the natural world, a number of variables constitute the ‘environment’ in Darwinian web design. Within any environment there exist constraints. These constraints result in a trade-off in one way or another.

A brilliant example in the natural world is the tail of the peacock. While his extravagant tail attracts many prospective mates, making it easy to pass on his genes, at the same time that heavy, flashy tail is a serious burden when it comes to eluding predators.

There is a trade-off between the tail's attractiveness for females and the survival of the bird itself.

In the digital world these trade-offs are built around design variables. You need to understand what these variables are so you can work to optimize them against each other and any other constraints. Map out what makes up your current working environment before beginning your design outline.

The Variables of Design

The most obvious variable you need to define is your audience. The demographic of an audience can dictate various aspects of web design, including the level of accessibility and tone of the copy. Your audience may constrain the design.

For instance, a very young audience may dictate the tone of your copy. Other variables include the technical constraints of the underlying application (if you are building a web front end), the limitations of HTML itself, and your choice of editing tools at the time of design.

Further variables might be the types of devices you need to consider in your design.

For a project aimed at high-end designers, you may be able to focus on desktop / laptop sized screens. On the other hand, if you’re designing for a broad, mass market site, then you really need to be responsive to all kinds of devices and connections. The recent "Mobilegeddon" update, in which Google began ranking mobile-friendly websites more highly, has also made responsive design a must.

Whatever your particular design criteria, know them first so that you understand the constraints and advantages of your ecosystem.

Otherwise you'll be the prettiest dead peacock in town.

Continue reading %Darwin and the Art of Web Design%

Using the Media Capture API

Do, 2015-07-02 21:00

Today I’d like to experiment with the Media Capture and Streams API, developed jointly at the W3C by the Web Real-Time Communications Working Group and the Device APIs Working Group. Some developers may know it simply as getUserMedia, which is the main interface that allows webpages to access media capture devices such as webcams and microphones.

You can find the source code for this project on my GitHub. Additionally, here’s a working demo for you to experiment with. In the latest Windows 10 preview release, Microsoft added support for media capture APIs in the Microsoft Edge browser for the first time. Much of this code was taken from the Photo Capture sample that the Edge dev team produced at their test drive site.

For those of you who want to dive a bit deeper, Eric Bidelman has a great article at HTML5 Rocks that goes into the storied history of this API.

Getting Up to Speed

The getUserMedia() method is a good starting point to understand the Media Capture APIs. The getUserMedia() call takes MediaStreamConstraints as an input argument, which defines the preferences and/or requirements for capture devices and captured media streams, such as camera facingMode, microphone volume, and video resolution.

Through MediaStreamConstraints, you can also pick a specific capture device using its deviceId, which can be derived from the enumerateDevices() method. Once the user grants permission, the getUserMedia() call will return a promise with a MediaStream object if the specified MediaStreamConstraints can be met.
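The constraints described above can be sketched as a plain object. This is a minimal, illustrative example: the resolution and facingMode values are arbitrary preferences, not requirements of the API, and `buildConstraints` is a helper name of my own choosing.

```javascript
// Build a MediaStreamConstraints object. Resolution and facingMode are
// expressed as preferences ("ideal"), so capture still succeeds if the
// camera can't match them exactly.
function buildConstraints(deviceId) {
  const video = {
    width: { ideal: 1280 },
    height: { ideal: 720 },
    facingMode: 'user'
  };
  // If a specific camera was chosen via enumerateDevices(), require it.
  if (deviceId) {
    video.deviceId = { exact: deviceId };
  }
  return { video, audio: false };
}

// In a browser, the constraints are passed to the promise-based
// getUserMedia() call on navigator.mediaDevices:
//
// navigator.mediaDevices.getUserMedia(buildConstraints())
//   .then(stream => { videoElement.srcObject = stream; })
//   .catch(err => console.error('Capture failed:', err));
```

Note the difference between `ideal` and `exact`: an `ideal` value is a hint the browser tries to honor, while an `exact` value makes the request fail if it can't be satisfied.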

All of this without needing to download a plugin! In this sample we’ll dive into the API and create some neat filters on the video and images we capture. Does your browser support it? Well, getUserMedia() has been around since Chrome 21, Opera 18, and Firefox 17, and is now working in Edge.

Continue reading %Using the Media Capture API%

Introduction to the Fetch API

Do, 2015-07-02 20:00

For years, XMLHttpRequest has been web developers' trusted sidekick. Whether directly or under the hood, XMLHttpRequest has enabled Ajax and a whole new type of interactive experience, from Gmail to Facebook.

The Fetch API aims to replace XMLHttpRequest as the foundation of communication with remote resources. What this new API looks like and what problems it solves is the topic of this article.

The Fetch API

The Fetch API provides a fetch() method defined on the window object, which you can use to perform requests. This method returns a Promise that you can use to retrieve the response of the request.

To illustrate the Fetch API, we'll use a few lines of code that retrieve photographs using the Flickr API and insert them into the page. At the time of writing, this API isn't well supported, so to get the code working I suggest you try it in the latest stable version of Chrome, which is version 43. Also note that you need to replace the placeholder I set ("your_api_key") with your own API key.

As the first task, you'll retrieve a few pictures of penguins from Flickr and display them on the page.
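A request along those lines can be sketched as follows. This is an illustrative sketch, not the article's own code: the `buildSearchUrl` helper is my own, the endpoint and parameters follow Flickr's documented `flickr.photos.search` method, and "your_api_key" is the placeholder to replace.

```javascript
// Placeholder: replace with your own Flickr API key.
const API_KEY = 'your_api_key';

// Assemble the Flickr REST URL for a photo search. URLSearchParams takes
// care of encoding each parameter.
function buildSearchUrl(apiKey, text) {
  const params = new URLSearchParams({
    method: 'flickr.photos.search',
    api_key: apiKey,
    text: text,
    format: 'json',
    nojsoncallback: '1'
  });
  return 'https://api.flickr.com/services/rest/?' + params.toString();
}

// In the browser, fetch() returns a Promise that resolves to a Response;
// calling response.json() returns another Promise with the parsed body:
//
// fetch(buildSearchUrl(API_KEY, 'penguins'))
//   .then(response => response.json())
//   .then(json => {
//     json.photos.photo.forEach(photo => {
//       // build each photo's URL and append an <img> to the page
//     });
//   })
//   .catch(err => console.error('Request failed:', err));
```

Compared with the equivalent XMLHttpRequest code, the promise chain replaces the usual tangle of `onreadystatechange` callbacks and manual `JSON.parse` calls.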

Continue reading %Introduction to the Fetch API%

Android Design Anti-Patterns and Common Pitfalls

Do, 2015-07-02 15:00

The more apps behave the way we expect them to, the more intuitive they are to use; the more intuitive they are to use, the easier it is for us to concentrate on our true objective.

The best user interfaces are so intuitive that the UI just disappears and lets us concentrate on what truly matters. People tend to be unaware of the user experience in an app unless it doesn't meet their expectations.

According to Wikipedia, an anti-pattern (or antipattern) is a common response to a recurring problem that is usually ineffective and risks being highly counterproductive. In this article we'll look at some anti-patterns and bad practices common in some Android applications, that get in the way of the user accomplishing their tasks, thus providing a poor user experience.

The Straight Port

The Straight Port is an app that was first made for another platform (usually iOS) and was later quickly and minimally made to work for Android. This usually results in Android apps that have the visual styling and UI conventions of other platforms.

The "design once, ship anywhere" approach rarely works. Different platforms have different rules and guidelines regarding UI and usability, and you have to take this into consideration when designing for a particular platform. Your users' expectations and behavior have been shaped and influenced by the other apps they use on that platform.

If your app doesn't meet these expectations, it's bound to cause frustration. Android users expect Android apps, so it's worth looking through the Android design documents to become conversant with the platform's conventions.

A few common pitfalls of the straight port are:

1. Bottom Tab Bars

On Android, tabs belong at the top.


(Source: http://www.google.com/design/articles/design-from-ios-to-android/)

2. Using iconography from other platforms

3. Right pointing carets on list items

(Source: http://developer.android.com/design/patterns/pure-android.html)

For more on this, you can read this guideline on how to design for Pure Android. There is also this article on designing from iOS to Android that is more recent as it covers designing for Android in the era of Material Design.

Designing for One Form Factor

Unlike other platforms, where you can determine the device your app will run on (phone or tablet) and know its screen size, on Android this isn't possible. You must therefore design your app to be adaptive, so that it works well on phones as well as tablets. The screen sizes of these devices also vary, so you must take this into account. A well-designed Android app works well and looks good on any device and screen size.

Other than designing for phones and tablets, you should also ensure that your design doesn't break when the user changes the device orientation. You should design for both portrait and landscape modes.

Don't assume that the user will only use the app in portrait and neglect landscape orientation. When the developer doesn't provide a layout for landscape orientation, the Android system lays out the UI as well as it can with what it has. This usually results in the portrait UI being spread out to fill the larger landscape screen, with elements stretched and widely spaced.

Small Touch Targets

Small touch targets can slow a user down, as they increase the chance of a wrong selection when the target sits next to other targets. The app may also seem unresponsive, as the user taps on what they think is the area affected by the touch and sees no noticeable action take place.

On Android, the ideal size for touch targets is at least 48dp. The material design specification provides guidelines for keylines and metrics you can use when designing your apps.

Neglecting Touch Feedback

Selections need to be immediately obvious. Touchable elements should have a pressed and focused state. Not giving a user feedback when they take an action increases the app's perceived latency - the app seems slower.

Selected items are made obvious by use of color and shape (e.g. making an icon/font bold). In material design, shadows are used to show that an element is at the forefront.

Material design has emphasized the use of touch feedback by not only making use of shadow, color and shape, but also by strongly encouraging the use of animations and transitions to give the user feedback. The following are some points from the Material Design Guide.

Continue reading %Android Design Anti-Patterns and Common Pitfalls%

Mastering Less Guards and Loops

Do, 2015-07-02 14:00

In the previous article we learned the basics of the Less mixin guards and loops. We saw that once we’ve gained a clear understanding of their structure we can use them properly. Now, we’ll explore how this knowledge can be put to practice by examining some real world examples. Creating Alert Boxes The first example […]

Continue reading %Mastering Less Guards and Loops%

Lessons Learned Developing the 99designs Tasks API

Mi, 2015-07-01 22:00

Dennis works at 99designs, and in this article he describes the choices, experiences and takeaways from building an API to complement the new Tasks service.

“Wanna see something cool?”

Two years ago my boss, Lachlan, came to me with a secret side project he'd been hacking on for getting small design tasks done quickly. The idea was for customers to submit a short design brief that would be matched to a designer who's waiting and ready to go.

We assembled a small team and spent the next two months working like crazy to get an initial public release out and prove it was something people wanted.

Lachlan's original vision included the idea of an API for graphic design work along the lines of Amazon's Mechanical Turk—but more beautiful and tailored specifically for graphic design.

We've continued to refine 99designs Tasks as a consumer service, but the vision of an API for graphic design continued to stick in our minds. We just weren’t quite sure if it was worth doing or not.

A Turning Point

Then we were caught by surprise. We discovered that our friends at Segment had built an automated tool using PhantomJS, which screen-scraped our website to work around the fact we hadn’t built an API yet!

I’ll admit—it was pretty embarrassing to be beaten to the punch by a customer on an idea we’d been thinking about for two years. On the other hand, there's probably no better way to validate an idea than to have a customer do it for you.

OK, Let’s Do This

We quickly realized that developing an API would involve more than just the technical design and implementation. The audience and concerns of an API are very different from anything we’d worked on before.

Scoping It Out

One of the first challenges we faced was figuring out what features we wanted in an API. We already had a few API customers that needed a fairly specific feature set for creating tasks—but we wanted to open up the possibilities and allow ideas we hadn’t even begun to think of yet. We made the decision to expand the scope beyond task creation to cover the entire task workflow in order to enable a wider set of possibilities.

Developer Marketing

Another big question we faced was how to market an API. Our customers are typically entrepreneurs and small businesses; the target audience of an API is very different from what we're used to.

There's a saying that "if you build it, they will come", but I'm not convinced it's quite true. We knew there was some hard work to be done if we wanted to attract developers. There are a few questions that developers might ask when looking at an API:

"Why should I be excited?"

"What are the benefits?"

One strategy we had for attracting developers was to have a collection of compelling examples that would inspire new ideas. The problem is that building these examples requires us to attract developers to build them in the first place. We tackled this chicken/egg problem by working with a group of launch partners in a private beta ahead of our public launch to build some great quality applications that we could show off.

We also applied a "dogfooding" approach at 99designs, where our development team worked on a number of app ideas internally—including a browser extension and a chat bot, all of which used our in-development API.

Continue reading %Lessons Learned Developing the 99designs Tasks API%

How to Grunt and Gulp Your Way to Workflow Automation

Mi, 2015-07-01 21:00

This article is part of a web dev series from Microsoft. Thank you for supporting the partners who make SitePoint possible. When you are new to front-end development and start mastering HTML5, CSS and JavaScript, the obvious next step is to put your hands on tools that most developers use to stay sane in this […]

Continue reading %How to Grunt and Gulp Your Way to Workflow Automation%