Javascript News


How to Do a Content Audit of Your Website


Most people break out in a cold sweat when they see the word “audit.”

The good news is this post is not about traditional audits. It’s actually about conducting a content audit of your website. However, just as with your personal finances, content audits are necessary and, when done correctly, extremely beneficial for your website.

Depending on the size of your website, a content review may take up to a few hours, but the benefits you’ll reap will far outweigh your investment. You can use audits to find and correct SEO errors, discover which practices produce the best results, and improve your content production process.

In this tutorial, you will learn everything necessary to conduct your first content audit, without reinventing the wheel. Here are the steps we will take:

  1. You will learn how to gather primary data from sources such as Google Analytics;
  2. Then you’ll learn how to combine the data from different sources into one spreadsheet;
  3. Finally, you will receive tips on how to use the information you have collected in order to gain insight and find opportunities to improve your content.

All of the tools recommended in this guide are either completely free or have a trial version.

You’ll need to use spreadsheet software to collect and manipulate the information for your audit. Which provider you choose is up to you. The most popular are Microsoft Excel and Google Spreadsheets. I recommend the latter because it is free and allows you to easily pull in and manipulate data from external sources such as Google Analytics, which I show you how to do later.

However, people have complained that Google Spreadsheets tends to run slow as your spreadsheet grows. So if your site is pretty large, you may want to use Excel.

Step 1: Gather SEO data

Generate a list of all URLs using Screaming Frog

The easiest way to gather data from your website is by downloading and running Screaming Frog on your computer. This software is specifically created for SEO experts. While its free version is very useful (and will do for this content audit), most of the advanced features are only available to paid users.

Run the report for your site and export the results into a .csv file. Then upload (or paste) the file into a Google Spreadsheet.

Screaming Frog will give you a lot to work with and it is up to you to choose which information is most relevant to your analysis. Here are the data points I find useful and why.

  • URL (duh)
  • Title (keywords)
  • Title length (characters)
  • Meta description (keywords)
  • Meta description length (characters)
  • Word count
  • H1 and H2 headings (keywords)

I will cover these points in more detail, and elaborate on why they are important later in this tutorial.

Pulling in data from Google Analytics

Once you have collected all of your URLs in a spreadsheet, it’s time to grab even more data, and add it to your spreadsheet. As I mentioned in the beginning, I use Google Spreadsheets because it allows you to pull in data from Google Analytics. Here’s how to do it.

First, go to the Add-ons menu in your spreadsheet and make sure you have Google Analytics installed (if you don’t, use Add-ons > Get add-ons… to enable it).

Select Add-ons > Google Analytics > Create New Report:

A new window will appear with several options:

  • Name your report (for example “Unique Pageviews”)
  • Choose the site for which you’re collecting stats
  • Choose the metrics and dimensions you’ll be using - in our example, the metric will be Unique Pageviews and the dimension Pages

Click on Create Report and a new sheet will appear with the details of the configuration you just created:

You can use this worksheet to change some of the details of your report.

For example, in the example above I have set the report to collect data from Analytics for one year back (Last N Days field) and have included only pages which have at least 10 unique visits (by adding ga:uniquePageviews>9 to the Filters field).

When you are happy with your configuration, select Add-ons > Google Analytics > Run reports.

Depending on the size of your site and the configuration of the report, it might take a few minutes, but in the end you should get a new sheet with the data you requested.

This is just a small example of what you can achieve by pulling data from Analytics. The simplest way to improve your analysis is to add more metrics. Here are some that you should consider:

  • (Unique) pageviews: already covered in the example above
  • Entrances: very useful when analyzing the performance of your landing pages
  • Bounce rate: learn which pages have high/low bounce rates and why
  • Time on page: learn which parts of your content are engaging and keep visitors reading until the end

Collect data about social shares

Armed with the list of URLs you get from Screaming Frog, head over to SharedCount and use the Bulk Upload function. Note: you’ll need to be logged in to use it - registering is free.

Run the report and export it to .csv to quickly add it to your spreadsheet.

SharedCount gives you stats from Facebook, Twitter, Google+, LinkedIn, Pinterest and StumbleUpon, but it’s up to you which of these to keep for your audit. For my site I find Twitter, Facebook, G+ and LinkedIn to be the most relevant.

Step 2: Putting all SEO data in one place

By now you should have a multitude of data scattered across different parts of your spreadsheet. It’s time to combine it all so that you can analyze it and draw insights quicker.

Here’s how we’ll do it.

Please note: We’ll be using the VLOOKUP function a lot in this section, so if you aren’t sure how it works, it’s a good idea to watch a primer video.

Choose one sheet that will serve as the Mastersheet for all your data. Of course, this could be a new sheet, but the one where you uploaded the data from Screaming Frog is also a good candidate since you need to start with this data anyway.

It is very likely that some of the URLs Screaming Frog collects will not be needed for your audit.

For example, since I use WordPress, there are plenty of URLs for author, category and tag archives, which I don’t need.

Use the filter function in Google Spreadsheets to hide these entries (they’ll still be in the spreadsheet, just not in plain sight). Here’s how to do it.

Select the cell range and click the Filter button:

Click on the little drop-down button you see in the heading row of the column where you want to apply the filter.

A new dialog will appear. Type the text that you want to filter out in the text area and click Clear. Then click OK. Here’s how it should look.

Next, add a column for each metric of your Google Analytics data. In our example, we only collected data on Unique Pageviews, so I’m going to add just one column.

Use VLOOKUP to get the data from the relevant worksheet. One problem I ran into doing this was that Screaming Frog gives you the full URL, while Google Analytics trims out your base domain.

This can make running a VLOOKUP a lot harder. I found an easy hack to eliminate this issue on SpreadsheetPro though. Here’s how it works.

Use the RIGHT( ) function (it works in both Excel and Google Spreadsheets) to take only a given number of characters from the end of the text string in a cell.

Since you want to cut out a certain number of characters from the beginning, you need to use this (this assumes the URL you want to trim is located in cell B1, so change accordingly):

=RIGHT(B1,LEN(B1)-[number of characters to remove])

LEN( ) is another formula, which gives you the length (in characters) of a particular string.

For example, the domain I’ve been using in this example is exactly 30 characters long when you add “http://www.” So I would need to write the following:

=RIGHT(B1,LEN(B1)-30)
This is what it looks like when I do it for all URLs in my sheet.
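If you’d rather script the cleanup than maintain formulas, the same trim is easy to express in JavaScript (a sketch; the base URL below is a placeholder for your own domain):

```javascript
// Trim a base domain from full URLs so they match the path-only
// format Google Analytics reports. "example.com" is a placeholder.
var base = "http://www.example.com";

function trimBase(url) {
  // Equivalent to =RIGHT(B1, LEN(B1) - LEN(base))
  return url.slice(base.length);
}

console.log(trimBase("http://www.example.com/blog/my-post/"));
// "/blog/my-post/"
```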

Now, it’s much easier to do that VLOOKUP and add your Analytics stats to the main sheet. Remember that you can Hide columns (by right-clicking on them) that you do not need for your analysis, but only use for your formulas.

Now, repeat the same process for your SharedCount data. Woo! Done with the preparation phase, and on to the analyzing section.

Step 3: Audit and analyze your content

In this step you will be performing the core analysis, learning what works, what doesn’t, what needs to be repeated, improved and/or scrapped altogether. Here is what you can do with the data you’ve collected.

Check SEO elements

The Screaming Frog report you ran earlier provides valuable information about the basics that have an effect on your search engine performance. Review the information you have to find and fix common mistakes. These are the elements you should study.

Title, Heading and Meta Description

These are immensely important to your SEO as they guide search engines, telling them what your content is about. Thus, it is important that they include your focus keyword for the given page.

Additionally, each element has nuances, which you should also aim to get right.

Title is what appears in search results. This is the place where you should target the most important and relevant keyword to your page.

Meta Description is the short paragraph that appears under your title in search results. Perhaps more important than filling it with keywords, you have to make sure it is written for humans to read. Not only will you make your visitors happy, but a well-written description also has the potential to improve your ranking in search results: when more people click on your site after seeing it, your clickthrough rate rises, which is a good signal to search engines about your content.

Your headings are where you should think about targeting so-called long-tail keywords. Trying to rank for basic keywords with huge search volumes (and consequently traffic) can be very challenging. On the other hand, there are hundreds, even thousands, of less competitive phrases which can still bring in good traffic and will also help you gain ground on those search terms that are hard to rank for.

Title and Meta Description Length

You should always remember to keep an eye on the length of your title and description because search engines will often choose a portion of each particular field to show and truncate the rest.

How much users see also depends on the screen that they are using. Make your meta tags too long and this is what you might get.

Many opinions exist on the optimal size of these two fields. The sweet spot for the title tag appears to be 50-55 characters, with 70 as the absolute maximum (go above this and it will be truncated for sure). For the meta description, most experts agree that 150-160 characters is the optimal length.

Use conditional formatting to highlight those pages where the title and description are longer than optimal. I’ve developed a color-coding system to do this. Here are the labels I use.

  • Green - 55 characters or less
  • Yellow - 56-70 characters
  • Red - 70+ characters
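As a sketch, the same color-coding thresholds can be expressed as a small JavaScript function (the function name is mine, not part of any tool):

```javascript
// Label a title by length, using the 55/70-character thresholds
// from the color-coding list above.
function titleLengthLabel(title) {
  if (title.length <= 55) return "green";  // within the sweet spot
  if (title.length <= 70) return "yellow"; // risky, may be truncated
  return "red";                            // will be truncated
}

titleLengthLabel("How to Do a Content Audit of Your Website"); // "green"
```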

Content, keywords and internal links

The auditing process offers you the opportunity to look at your content from above and make sure it is interesting, engaging and relevant.

Have you mentioned target keywords enough or too much? Have you targeted long-tail keywords in your content less than three to four times per page?

Finally, make sure you have enough internal links within each page. Linking to at least two to three pages within the same site is a great tactic.

Always aim to link to deep resources, such as blog posts and content offers. Linking to one of your core pages (homepage, about, etc.) should only be done when absolutely necessary.

Analyze your best content

With the information from Google Analytics and SharedCount you can find out where you really hit the nail on the head.

Combining this data with additional layers of information, such as word count and search traffic, can give you incredible insight. Here are a few examples.

Ideal content length for most social shares

Using this technique you can determine how long your articles should be in order to maximize the number of social shares.

First, decide how you want to categorize length. I recommend the following labels:  

  • Less than 500 words: Short
  • 500-1000 words: Medium
  • More than 1000 words: Long

Create a new column and use a double IF formula. Assuming the word count sits in column F (adjust to match your own sheet), it looks something like this:

=IF(F2<500,"Short",IF(F2<=1000,"Medium","Long"))
In the end you should get something like this:

Create another column to summarize all social shares.

Now, it’s time to create a pivot table to combine both sets of data. In Google Spreadsheets, select your entire data range and go to Data > Pivot table; a new sheet and a dialog window will appear.

Using my example, I select Group by: Length and Display: Total shares with the option Summarize by: AVERAGE:
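If you prefer code to pivot tables, the same average-by-group summary can be sketched in JavaScript (the row shape below is an assumption for illustration):

```javascript
// Average total shares per length group - the same summary the
// pivot table produces with Group by: Length, Summarize by: AVERAGE.
var rows = [
  { length: "Short", shares: 10 },
  { length: "Short", shares: 20 },
  { length: "Long", shares: 90 }
];

function averageSharesByLength(rows) {
  var sums = {};
  rows.forEach(function (row) {
    var group = sums[row.length] || (sums[row.length] = { total: 0, count: 0 });
    group.total += row.shares;
    group.count += 1;
  });
  var averages = {};
  Object.keys(sums).forEach(function (key) {
    averages[key] = sums[key].total / sums[key].count;
  });
  return averages;
}

averageSharesByLength(rows); // { Short: 15, Long: 90 }
```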

Finally, in order to visualize your findings better, create a graph using data from the pivot table. Here is what mine looks like.

Unsurprisingly, longer articles tend to get more shares on average than shorter ones. You can use the same technique to see which are read by more people (by using Unique pageviews instead of Total shares), but I’ll let you try that yourself.

Best times to publish on your site

Using the same technique, you can also discover the best time to publish your content in order to get the most visits and/or shares. To do this, you’ll need to pull in the publication date of each piece and work out which day of the week it corresponds to manually.

Note: I’m sure there’s an automated way to do this. I just haven’t found it yet. If you know how to do it, please let me know in the comments.
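For what it’s worth, the lookup can be automated with a short script; here is one sketch in JavaScript (the helper function is hypothetical, not a feature of any of the tools above):

```javascript
// Map a publish date (as a string) to a weekday name.
// getUTCDay() avoids timezone shifts when the string has no time part.
var days = ["Sunday", "Monday", "Tuesday", "Wednesday",
            "Thursday", "Friday", "Saturday"];

function dayOfWeek(dateString) {
  return days[new Date(dateString).getUTCDay()];
}

dayOfWeek("2015-05-25"); // "Monday"
```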

Analyze landing page performance

Google no longer publishes the exact search terms your visitors used to get to your site, but that doesn’t mean there is no way to deduce some of this information.

First, you must register your website on Google Webmaster Tools.

Under Search Traffic > Search Queries you can learn some of the keywords you’re ranking for and how your page ranks for each.

But even more useful conclusions can be drawn from your content audit.

Ideally, you’re already using landing pages to target specific keywords.

Pull in data from Google Analytics about the number of people who enter your site from these pages (the relevant metric is called Entrances). Analyze which ones perform well and see how well they rank for the targeted keywords. Don’t forget to also use Google Webmaster Tools.

Have a look at these pages and figure out what you did well on them in terms of structure, content, etc. Collect even more data from Analytics and see how they perform in terms of bounce, average time spent on site, etc.

Step 4: Use the audit to guide your SEO strategy

The best way to make use of a content audit is to adopt a strategic approach and use your findings to create a strategy. I use four general buckets for my content:

  • Leave as-is
  • Improve
  • Consolidate
  • Remove

If you are happy with the performance of a certain piece of content, and it’s up to date, relevant and of high quality, leave it as it is.

If your content hasn’t been updated in a while, new information and research has emerged, or the article isn’t doing very well with the intended keywords, mark it for improvement. Since your time and resources are limited, make sure to prioritize what you’ll work on first.

Some pieces of content may be better together. If you have articles dealing with similar topics that aren’t doing so well in search rankings, you might want to consider consolidating them into one more in-depth resource.

For some sites with a lot of content, the best tactic may be to remove some pages from the sitemap altogether. This way you allow search engine bots to focus only on those pieces of content that you think can be most beneficial to your rankings.

Continue reading: How to Do a Content Audit of Your Website

An Introduction to Functional JavaScript

Mon, 2015-05-25 20:00

You’ve heard that JavaScript is a functional language, or at least that it’s capable of supporting functional programming. But what is functional programming? And for that matter, if you’re going to start comparing programming paradigms in general, how is a functional approach different from the JavaScript that you’ve always written?

Well, the good news is that JavaScript isn’t picky when it comes to paradigms. You can mix your imperative, object-oriented, prototypal, and functional code as you see fit, and still get the job done. But the bad news is what that means for your code. JavaScript can support a wide range of programming styles simultaneously within the same codebase, so it’s up to you to make the right choices for maintainability, readability, and performance.

Functional JavaScript doesn’t have to take over an entire project in order to add value. Learning a little about the functional approach can help guide some of the decisions you make as you build your projects, and picking up a few functional patterns and techniques can put you well on your way to writing cleaner, more elegant JavaScript, regardless of how you prefer to structure your code.

Imperative JavaScript

JavaScript first gained popularity as an in-browser language, used primarily for adding simple hover and click effects to elements on a web page. For years, that’s most of what people knew about it, and that contributed to the bad reputation JavaScript earned early on.

As developers struggled to match the flexibility of JavaScript against the intricacy of the browser document object model (DOM), actual JavaScript code often looked something like this in the real world:

[code language="js"]
var result;

function getText() {
  var someText = prompt("Give me something to capitalize");
  capWords(someText);
  alert(result.join(" "));
}

function capWords(input) {
  var counter;
  var inputArray = input.split(" ");
  var transformed = "";
  result = [];
  for (counter = 0; counter < inputArray.length; counter++) {
    transformed = [
      inputArray[counter].charAt(0).toUpperCase(),
      inputArray[counter].substring(1)
    ].join("");
    result.push(transformed);
  }
}

document.getElementById("main_button").onclick = getText;
[/code]

So many things are going on in this little snippet of code. Variables are being defined on the global scope. Values are being passed around and modified by functions. DOM methods are being mixed with native JavaScript. The function names are not very descriptive, and that’s due in part to the fact that the whole thing relies on a context that may or may not exist. But if you happened to run this in a browser inside an HTML document that defined a <button id="main_button">, you might get prompted for some text to work with, and then see an alert with the first letter of each word in that text capitalized.

Imperative code like this is written to be read and executed from top to bottom (give or take a little variable hoisting). But there are some improvements we could make to clean it up and make it more readable by taking advantage of JavaScript’s object-oriented nature.
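To give a taste of where such a cleanup might lead, here is one possible rewrite (a sketch, not the article’s official solution) that drops the global variable and leans on array methods:

```javascript
// A self-contained version of the capitalization logic: no globals,
// no side effects - the function takes input and returns output.
function capWords(input) {
  return input
    .split(" ")
    .map(function (word) {
      return word.charAt(0).toUpperCase() + word.substring(1);
    })
    .join(" ");
}

capWords("hello functional world"); // "Hello Functional World"
```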

Continue reading: An Introduction to Functional JavaScript

Managing Broken Links and 404s in WordPress

Mon, 2015-05-25 19:00

Broken links create a horrible user experience. Who wants to click on a link, only to find that it goes nowhere? That the awesome article you were all set to read doesn’t actually exist?

Unfortunately, the natural decay of links (also often referred to as link rot) occurs all too often. Link rot happens for any number of reasons: domains expire, websites are abandoned, incorrect URLs are used, and websites are restructured using new URLs. And, even though best practice dictates the use of 301 redirects in the case of website restructures, not everyone sets them up. Or, if they do - they're often set up incorrectly.

This shouldn’t stop you from linking to other sites. Linking to other websites is an important component of what the web is all about. Some people think that linking out to third-party websites will cause visitors to leave their site, reducing their visitor stats, but linking out is exactly what the web was designed for. You shouldn’t be scared of it; rather, you should embrace the value that enriching your content offers. Providing your visitors with related content or a deeper dive on a given topic is a far better user experience than dumping them at a dead end. This is just my opinion; I know there are different schools of thought when it comes to outbound links.

So, if you can’t control other people’s websites, and you shouldn’t stop linking out to other websites, then what can you do to make sure you're not sending people to non-existent pages? Well, it’s pretty simple really: you can control your own site by performing regular link maintenance and keeping your outbound links in check.

The management of broken links is an integral part of good WordPress maintenance. And, thanks to a number of plugins and tools at our disposal, it's becoming increasingly easy to automate the process of link maintenance these days. So, without further ado, let’s take a look at some of these tools.

Continue reading: Managing Broken Links and 404s in WordPress

Remote Control Your Mac With Node.js and Arduino

Mon, 2015-05-25 18:00

The combination of Arduinos and Node.js allows us to do a lot of unexpected things. In this article, I'll show how you can create a remote control for your Mac via Arduinos, Node.js and AppleScript.

If you are new to combining Arduinos and Node.js, I've previously covered turning on LED lights and displaying web API data on LCD text displays.

Our Arduino remote control will increase and decrease our Mac's volume, tell our Mac to play an iTunes playlist of our choosing and set it to stop whatever is playing on iTunes (which is likely to be that playlist!).

Keep in mind, this demo provides access to commands directly on your Mac - there is the potential for this to be misused or harmful if you provide too much access! Keep it for personal use rather than big corporate projects.

Setting Up Our Arduino

Ensure that you've got the StandardFirmata sketch installed on your Arduino board itself, as we'll be using the johnny-five library to send instructions to our Arduino. That will only work if you've got StandardFirmata on there first:

Our Arduino breadboard set up for this demo looks like so:

Our Server Code

Our Node.js server code is relatively short and sweet for this demo:

[code language="js"]
var five = require('johnny-five'),
    board = new five.Board(),
    exec = require('child_process').exec,
    btn1, btn2, btn3, btn4, btn5,
    currentVolLevels = {};

board.on('ready', function() {
  console.log('Arduino board is ready!');

  btn1 = new five.Button(7);
  btn2 = new five.Button(6);
  btn3 = new five.Button(5);
  btn4 = new five.Button(4);
  btn5 = new five.Button(3);

  btn1.on('down', function(value) {
    askiTunes('play playlist \"Top 25 Most Played\"');
  });

  btn2.on('down', function(value) {
    askiTunes('stop');
  });

  btn3.on('down', function(value) {
    setVolumeLevel(currentVolLevels['output volume'] + 5);
  });

  btn4.on('down', function(value) {
    setVolumeLevel(currentVolLevels['output volume'] - 5);
  });

  btn5.on('down', function(value) {
    toggleMute();
  });

  getVolumeLevels();
});

function getVolumeLevels() {
  exec("osascript -e 'get volume settings'", function(err, stdout, stderr) {
    if (!err) {
      var levels = stdout.split(', ');
      levels.forEach(function(val, ind) {
        var vals = val.split(':');
        if (vals[1].indexOf('true') > -1) currentVolLevels[vals[0]] = true;
        else if (vals[1].indexOf('false') > -1) currentVolLevels[vals[0]] = false;
        else currentVolLevels[vals[0]] = parseInt(vals[1]);
      });
      console.log(currentVolLevels);
    }
  });
}

function setVolumeLevel(level) {
  console.log('Setting volume level to ' + level);
  exec("osascript -e 'set volume output volume " + level + "'", function() {
    getVolumeLevels();
  });
}

function toggleMute() {
  var muteRequest = currentVolLevels['output muted'] ? 'without' : 'with';
  console.log('Toggling mute to ' + muteRequest + ' muted');
  exec("osascript -e 'set volume " + muteRequest + " output muted'", function() {
    getVolumeLevels();
  });
}

function askiTunes(event, callback) {
  exec("osascript -e 'tell application \"iTunes\" to " + event + "'", function(err, stdout, stderr) {
    console.log('iTunes was just asked to ' + event + '.');
  });
}
[/code]

That Code Explained

Now for the all-important part of the article: what all of that code means! Let’s go over how everything fits together.

In order to interface with our Arduino board, we are using johnny-five. We start by setting up our johnny-five module and our Arduino board through that. Then we define variables to store our five buttons.

[code language="js"]
var five = require('johnny-five'),
    board = new five.Board(),
    btn1, btn2, btn3, btn4, btn5,
[/code]

Continue reading: Remote Control Your Mac With Node.js and Arduino

On Our Radar: Responsive Images Are Trolling Us All

Mon, 2015-05-25 17:00

It’s been a week of putting on our best old school berets, magnifying glasses, optional fashionable pipe, and detective vest. Responsive images are difficult to comprehend. 0 is false and 1 is true, right? 1? On Our Radar We begin with DaMarkov’s frustration towards this whole ‘responsive web design’ thing that’s been going around. […]

Continue reading: On Our Radar: Responsive Images Are Trolling Us All

Mastering Composer – Tips and Tricks

Mon, 2015-05-25 16:00

Composer has revolutionized package management in PHP. It upped the reusability game and helped PHP developers all over the world generate framework agnostic, fully shareable code. But few people ever go beyond the basics, so this post will cover some useful tips and tricks.


Although it’s clearly defined in the documentation, Composer can (and in most cases should) be installed globally. Global installation means that instead of typing out

php composer.phar somecommand

you can just type out

composer somecommand

Continue reading: Mastering Composer – Tips and Tricks

7 Mobile UX Mistakes You’re Probably Making Right Now

Mon, 2015-05-25 15:00

Making changes on mobile UX can be a tricky process, especially if you come from a web background. In mobile, developers have more constraints, including screen real estate, attention times, and UI control limitations. Improving the mobile experience is always a learning process full of trial and error, so this list will get you on the right track by helping you steer clear of common pitfalls.

Mistake #1. Assuming your users need to sign in

Everyone knows there are a ton of benefits to having users sign in, yet it's also a significant pain point for your users. Who doesn't get impatient having to type in the same personal data hundreds and hundreds of times for each app or service?

Most apps solve this by allowing users to temporarily skip registration so that they can try out the app and get a sense of its value.

While this method works well enough for Apple to adopt it into their User Experience Guidelines, cutting the funnel even further can have huge benefits. If registration is a pain point, why not see what happens when you remove that pain entirely?

HotelTonight, a hotel booking app, used A/B testing to create a variant where users could complete the transaction without having to create a dedicated account. Previously all users had to sign in before completing a booking.

They tracked the bounce rate, as well as completed transactions, and discovered that making sign-ins optional actually increased bookings by 15%.

To still encourage sign-ups, users are given the option to create an account in order to save their data and make future bookings even more painless and quick.

HotelTonight significantly decreased friction by removing a common pain point, then gave users an incentive to sign up on their own terms. Removing mandatory sign-ins turned out to be a great decision for the app, improving its bottom line.

Mistake #2. Assuming you need to bombard users with "value"

Two of the most common recommendations for app optimization are to include onboarding tutorials, and allow users to skip registration. Each of these is supposed to give users a great sense of value for an app, making sure they know all they can do with your app, and what they'll get out of it.

Even though these techniques can typically increase sign-ins and engagement, Vevo wanted to see if removing them could increase their KPIs. They hypothesized that removing tutorials would increase the number of users who logged in and signed up.

After testing two variants, one with the tutorial and one without, the results were clear. Without tutorials, 10% more users logged in and 6% more signed up.

Vevo believed the tutorial was unnecessary because their brand is strong enough that most users who download the app are already familiar with its core value proposition. They didn't need convincing.

Instead, users simply wanted to start watching music videos as soon as possible, and the tutorial only proved to be a hindrance. Their new flow gives two sets of simple instructions, and off you go.

Trying to convince users of possible value through a tutorial can impede them from getting to the actual value in an app. Be sure that your app isn't doing the same.

Mistake #3. Copying other app experiences

The above two blunders come from implementing generally accepted good practices without questioning whether they could be improved. While online advice is generally a good starting point, each app and product is unique in its goals, audience, value, functionality, and more. Just because something works for someone else doesn't mean it will automatically work for you.

Instead of implementing example tests found online, draw your ideas from customer feedback. Create surveys, read reviews, and gather as much qualitative data as you can as to what users would like to see changed. Using that data, create new test ideas specific to your app, then use A/B testing to determine their effects on your audience.

Instead of just following in others' footsteps, teams should strive to use A/B testing as a powerful tool to gain quantitative data about whether a change improves your KPIs or degrades them.

Mistake #4. Underestimating how long it can take to update a mobile app

It's easy to forget how much longer any change on mobile takes when compared to web. On the web, information is stored on company controlled servers so making changes is a breeze. If a mistake is pushed into production, devs can quickly revert to a previous state, or make adjustments and deploy them instantly.

On mobile, making changes is not so easy.

Since an app is hosted on a client's phone, any update needs to endure a long and arduous journey before making its way into a client's hands. After development, any changes are subject to an often lengthy app store review process.

In addition, updates also generally require unwilling users to manually update the apps on their phones, which can be a challenge all in itself. This makes it all the more important that what you push out has been thoroughly tested and proven to improve the experience, rather than detract from it.

To help decrease the chances of an embarrassing blunder, use beta testing, or A/B split testing to get feedback on how your users respond. Beta testing allows devs to get qualitative feedback through the form of surveys, emails, comments, or crash reports. A/B testing helps you determine quantitative cause/effect relationships to show you how each change is affecting your core metrics.

Fixing mistakes on mobile can be an excruciating process, but integrating testing as part of your development cycle can help alleviate much of the risk.

If a mistake does occur, you can also use feature flagging and instant updates to create hotfixes and quickly quell any impending disasters.

Continue reading %7 Mobile UX Mistakes You’re Probably Making Right Now%

CanCanCan: The Rails Authorization Dance

Mon, 2015-05-25 14:00

Recently, I have written an overview of some popular authentication solutions for Rails. However, in many cases, having authentication by itself is not enough - you probably need an authorization mechanism to define access rules for various users. Is there an existing solution, preferably one that isn't very complex, but is still flexible?

Meet CanCanCan, a flexible authorization solution for Rails. This project started as CanCan, authored by Ryan Bates, the creator of RailsCasts. However, a couple of years ago the project became inactive, so members of the community decided to create CanCanCan, a continuation of the initial solution.

In this article, I'll integrate CanCanCan into a simple demo project, defining access rules, looking at possible options, and discussing how CanCanCan can reduce code duplication. After reading this post, you will have a strong understanding of CanCanCan's basic features and be ready to utilize it in real projects.

The source code can be found on GitHub.

A working demo is available at

Preparing the Playground: Planning and Laying the Foundation

To start hacking on CanCanCan we have to prepare a playground for our experiments first. I am going to call my app iCan because I can (hee!):

[ruby] $ rails new iCan -T [/ruby]

I am going to stick with Rails 4.1 but CanCanCan is compatible with Rails 3, as well.

The demo application will present users with a list of projects, both ongoing and finished. Users with different roles will have different levels of access:

  • Guests won't have any access to the projects. They will only see the main page of the site.
  • Users will be able to see only the ongoing projects. They won't be able to modify or delete anything.
  • Moderators will have access to all projects with the ability to edit the ongoing ones.
  • Admins will have full access.

Our task will be to introduce those roles and define proper access rules for them.
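As a plain-Ruby sketch of that rules table before expressing it with CanCanCan (the Project struct and can? helper here are illustrative stand-ins, not CanCanCan's API):

```ruby
# Plain-Ruby sketch of the access rules above. Project and can? are
# illustrative only; CanCanCan expresses this table in an Ability class.
Project = Struct.new(:name, :finished)

def can?(role, action, project)
  case role
  when :guest     then false                                 # no access to projects
  when :user      then action == :read && !project.finished  # ongoing only, read-only
  when :moderator then action == :read ||
                       (action == :update && !project.finished)
  when :admin     then true                                  # full access
  else false
  end
end
```

With CanCanCan itself, the same table becomes a series of can calls inside an Ability class, which the article builds next.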

Continue reading %CanCanCan: The Rails Authorization Dance%

What Freelancer Schedules Actually Look Like

Sat, 2015-05-23 16:00

The life of a freelancer can be something of a mystery. Where do they go? What do they do? Do they even put on pants? To demystify the daily affairs of these laptop wielding enigmas, we've asked three successful freelancers to share their schedules with us. They have also offered tips on how to structure […]

Continue reading %What Freelancer Schedules Actually Look Like%

The Final Steps to Mastering JavaScript’s “this” Keyword

Fri, 2015-05-22 18:00

In a previous article we learned the fundamentals of using JavaScript’s this keyword properly. We saw that the crucial factor in determining what this refers to is the current execution context. However, this task can be a bit tricky in situations where the context gets changed in a way we don’t expect. In this article I will highlight when this might happen and what we can do to remedy it.

Fixing Common Issues

In this section we’ll explore some of the most common issues arising from the use of the this keyword and we’ll learn how to fix them.

1. Using this in Extracted Methods

One of the most common mistakes that people make is when trying to assign an object’s method to a variable and expecting that this will still point to the original object. As we can see from the following example, that simply doesn’t work.

[code language="js"]
var car = {
  brand: "Nissan",
  getBrand: function() {
    console.log(this.brand);
  }
};

var getCarBrand = car.getBrand;
getCarBrand(); // output: undefined
[/code]

JS Bin

Even though getCarBrand appears to be a reference to car.getBrand(), in fact, it’s just another reference to getBrand() itself. We already know that the call-site is what matters in determining the context, and here, the call-site is getCarBrand(), which is a plain and simple function call.

To prove that getCarBrand points to a baseless function (one which isn’t bound to any specific object), just add alert(getCarBrand); to the bottom of the code and you’ll see the following output:

[code language="js"]
function() {
  console.log(this.brand);
}
[/code]

getCarBrand holds just a plain function, which is no longer a method of the car object. So, in this case, this.brand actually translates to window.brand, which is, of course, undefined.

If we extract a method from an object, it becomes a plain function again. Its connection to the object is severed, and it no longer works as intended. In other words, an extracted function is not bound to the object it was taken from.

So how can we remedy this? Well, if we want to keep the reference to the original object, we need to explicitly bind the getBrand() function to the car object when we assign it to the getCarBrand variable. We can do this by using the bind() method.

[code language="js"]
var getCarBrand = car.getBrand.bind(car);
getCarBrand(); // output: Nissan
[/code]

Now, we get the proper output, because we successfully redefine the context to what we want it to be.
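Besides bind(), the call() and apply() methods set this for a single invocation rather than permanently. A small sketch reusing the same car object, returning the brand instead of logging it so the results are easy to check:

```javascript
var car = {
  brand: "Nissan",
  getBrand: function () {
    return this.brand;
  }
};

// bind() creates a new function permanently tied to car:
var getCarBrand = car.getBrand.bind(car);

// call() and apply() set `this` for one invocation only:
var brandViaCall = car.getBrand.call(car);
var brandViaApply = car.getBrand.apply(car);
```

Use bind() when you need to pass the method around; use call() or apply() when you only need the correct context once.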

Continue reading %The Final Steps to Mastering JavaScript’s “this” Keyword%

CRUD (Create Read Update Delete) in a Laravel App

Fri, 2015-05-22 16:00

In the previous part, we’ve bootstrapped our Laravel CRUD application by creating the database, some controllers, basic routes and simple views. In this part, we’ll wrap things up and implement proper CRUD.

If you’d like to follow along with this interactive walkthrough of Laravel’s docs, please catch up by reading the first part now.

Creating A Record

Continuing right where we left off, let’s create the page where we’ll actually perform this action. In our TasksController, let’s return a view like this:

public function create()
{
    return view('tasks.create');
}

And now, in our views directory, let’s create tasks/create.blade.php, and enter some starter content:

@extends('layouts.master')

@section('content')
    <h1>Add a New Task</h1>
    <p class="lead">Add to your task list below.</p>
    <hr>
@stop

Continue reading %CRUD (Create Read Update Delete) in a Laravel App%

Creating a Barcode and Metadata Reader in iOS

Fri, 2015-05-22 15:00

More than ever, users expect iOS apps to be connected. Data flows through iOS apps at a blistering pace. It can come via an API, through messaging or communication with servers. Scanning the many types of available barcodes is another form of data apps can benefit from.

As of iOS 7, Cocoa Touch supports reading data from barcodes and other types of metadata natively. In this tutorial, I’ll show how to set up barcode scanning in iOS apps through the built-in frameworks available.

Continue reading %Creating a Barcode and Metadata Reader in iOS%

Graph Algorithms in Ruby

Fri, 2015-05-22 14:00

A lot (read: most) of Rubyists are focused on one aspect of software engineering: web development. This isn't necessarily a bad thing. The web is growing at an incredible rate and is definitely a rewarding (monetarily and otherwise) field in which to have expertise. However, this does not mean that Ruby is only good for web development.

The standard repertoire of algorithms is pretty fundamental to computer science and having a bit of experience with them can be incredibly beneficial. In this article, we'll go through some of the most basic graph algorithms: depth first search and breadth first search. We'll look at the ideas behind them, where they fit in with respect to applications, and their implementations in Ruby.


Before we can really get going with the algorithms themselves, we need to know a tiny bit about graph theory. If you've had graph theory in any form before, you can safely skip this section. Basically, a graph is a group of nodes with some edges between them (e.g. nodes representing people and edges representing relationships between the people). What makes graph theory special is that we don't particularly care about the Euclidean geometrical structure of the nodes and edges. In other words, we don't care about the angles they form. Instead, we care about the "relationships" that these edges create. This stuff is a bit hand-wavy at the moment, but it'll become clear as soon as we look at some concrete examples:

Alright, so there: we have a graph. But, what if we want a structure that can represent the idea that "A is related to B but B isn't related to A"? We can have directed edges in the graph.

Now, there is a direction to go with each relationship. Of course, we can create a directed graph out of an undirected graph by replacing each undirected edge with two directed edges going opposite ways.
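In Ruby, such a graph is often stored as an adjacency list, i.e. a hash mapping each node to the nodes its outgoing edges point to. A minimal sketch (node names are illustrative):

```ruby
# A directed graph as an adjacency list: each node maps to the
# targets of its outgoing edges.
graph = {
  a: [:b],      # a -> b, but not b -> a: the edge is directed
  b: [:c],
  c: [:a, :d],
  d: []         # d has no outgoing edges
}

# An undirected edge is equivalent to two opposite directed edges:
undirected = { x: [:y], y: [:x] }
```

Note that nothing in this representation cares about geometry, only about which nodes each node can reach directly.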

The Fundamental Problem

Say we're given a directed graph G and two nodes, S and T (usually referred to as the source and terminal). We want to figure out whether there is a path between S and T. Can we get to T by following the edges (in the right direction) from S? We're also interested in which nodes would be traversed in order to complete this path.

There are two different solutions to this problem: depth first search and breadth first search. Given the names and a little bit of imagination, it's easy to guess the difference between these two algorithms.
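To make the reachability question concrete before diving in, here is a minimal breadth first search over an adjacency-list graph; the graph shape is illustrative, and the article develops fuller implementations next:

```ruby
# Minimal breadth first search: returns a path from source to terminal
# as an array of nodes, or nil if terminal is unreachable.
def bfs_path(graph, source, terminal)
  queue = [[source]]           # queue of partial paths, oldest first
  visited = { source => true }
  until queue.empty?
    path = queue.shift
    node = path.last
    return path if node == terminal
    (graph[node] || []).each do |neighbor|
      next if visited[neighbor]
      visited[neighbor] = true
      queue << (path + [neighbor])
    end
  end
  nil                          # no path exists
end
```

Because the queue is first-in first-out, the first path found is also a shortest one; swapping the queue for a stack would turn this into depth first search.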

Continue reading %Graph Algorithms in Ruby%

Video: Scalable Backgrounds in CSS

Fri, 2015-05-22 02:33

In this short video, I'll show you how to create a background container that scales seamlessly to fit any browser size.

Continue reading %Video: Scalable Backgrounds in CSS%

Why Bring a DevOps Spirit to Non-Engineers?

Fri, 2015-05-22 01:50

This article was sponsored by PagerDuty. Thank you for supporting the sponsors who make SitePoint possible!

In college, I was part of a student body that managed an intranet portal. One of our biggest issues was recruitment. My team of about 35 first organized a written test for over 400 people, of which 50 made the shortlist. Then we painstakingly interviewed these applicants to arrive at our final selections of about 15.

Those involved in the selection usually collaborated through email, and the process took a few back-to-back nights. On the surface, it looks like a tedious process (and it definitely is!). Do you think it can be improved? Keep your answer in mind, as I will revisit this question later in the post.

The Good Ol’ Days

Email was an important invention. Although it entered our lives slowly, email eventually became crucial. Over time, we used email for all sorts of tasks in addition to the basic purpose of communication: as a way to manage tasks, as a de facto forum, as a place for reminders, or even as cloud storage (have you ever emailed yourself an important document?).

In the field of web development, email found a lot of additional uses. Development teams communicated through email lists. Code patches were shared over email too. With the emergence of new technologies like website downtime monitors and CRM software, those alerts were shared over email too.

Email providers evolved to help us manage the incoming mail. They provided us with tags and filters to better manage our inbox. However, this was only a temporary stopgap, and we soon reached a point where email became overwhelming — there were too many emails and too little time.

Enter DevOps

After outlining the cracks that develop when we try to manage everything over email, let’s answer the question: Why did we overuse email in the first place?

I believe the answer can be found through an analogy: the use of smartphones in our daily lives. Smartphones are now devices we can’t live without. Whether it’s connecting to friends, taking selfies, paying bills, setting alarms, managing to-do lists, ordering food, booking tickets, shopping — modern smartphones do it all, every day.

As humans, we prefer using a single handheld device to perform all our actions because we prefer the idea of singularity — a single medium that helps us in all our tasks. That is the reason we used email for everything — managing all activity through a medium on a single screen is simpler than ranging over different media.

To answer our original question: email served as the medium that connected everything else and joined it all together. Looking at the world of web development, we used email to connect the different parts of developing a web application — from understanding the requirements through discussions, to coding and testing, to automating the code deployment process, to analyzing the feedback of end users. This is where DevOps comes in.

DevOps is a portmanteau of the words “development” and “operations”. The idea of DevOps is to merge the whole development and system administration process into one that functions seamlessly and efficiently. At its center is proper collaboration between teams and processes through communication and the use of the right tools.

Continue reading %Why Bring a DevOps Spirit to Non-Engineers?%

Spider: An Exciting Alternative to JavaScript

Fri, 2015-05-22 00:48

Spider is one of the new languages that try to improve our code by providing more reliability. Some would describe it as CoffeeScript with JavaScript syntax, but such a description would fail to emphasize the real benefits of Spider. Spider contains many more unique and interesting concepts than most alternatives like CoffeeScript. While the latter is certainly more mature than Spider, we get some nice options by choosing the language named after the eight-legged arthropods. If we just want to experiment a little bit with yet another language, search for a trustworthy JavaScript alternative, or try to write less and do more, Spider seems to be a good candidate.

Basic Concepts

Spider is designed around its slogan: it's just JavaScript, but better. This means we won't get a compile-time type system or type checker of any kind. We also won't miss our beloved C-style syntax, with curly brackets for blocks, round brackets for function calls, and square brackets for arrays. Finally, we also don't see a custom VM on top of JavaScript or anything else that breaks compatibility with existing JavaScript code. Yes, this is really JavaScript.

The creators of Spider realized that there is no point in debating static versus dynamic languages. Each one has its advantages and disadvantages. The reason for choosing the fully dynamic side with Spider is simple: JavaScript is already dynamic, and interacting with otherwise dynamic code gets a lot simpler when the language embraces a dynamic type system. There are two more important things that should be mentioned here:

  1. Spider is compiled to JavaScript (i.e. transpiled)
  2. Some features are inspired from languages like Go, C#, and CoffeeScript
The files are not transpiled to older versions of JavaScript, but to the most recent standard ECMAScript 6. To guarantee support across most browsers, Spider uses Google's Traceur to generate ECMAScript 5 compatible files. What this means is that Spider is already taking advantage of future improvements, with the current output being backward compatible.

Continue reading %Spider: An Exciting Alternative to JavaScript%

Boost Your WordPress and Drupal Performance with Pantheon

Thu, 2015-05-21 22:40

This post was sponsored by Pantheon. Thank you for supporting the sponsors who make SitePoint possible!

Consider the typical tasks involved when deploying your WordPress or Drupal website to a new web host…

  1. Sign-up and create a new environment.
  2. If you’re using a dedicated or virtual server, install and/or configure a web server, PHP, MySQL and other dependencies.
  3. Create a new database with a user ID and password.
  4. Upload several megabytes of application code.
  5. Edit the application’s configuration parameters.
  6. Run the installer process.
  7. Upload, install and configure third-party themes and plugins.
  8. Add your content.
  9. Test. Swear. Hit your keyboard. Fix the problems. Repeat testing again.
  10. Redo the whole process for your test, staging and production environments.

And then your problems really start…

  • Updates can be difficult to deploy everywhere
  • A traffic spike caused by a popular article or advertising campaign can bring the server to a halt at the worst possible moment
  • A DoS attack can be catastrophic for every site hosted on the same environment
  • The website is difficult to scale as you grow
  • Hardware and software updates can cause outages or compatibility problems.

The process may be manageable for a couple of installations, but consider hosting a few dozen websites – or thousands. Managing multiple WordPress or Drupal sites is time-consuming, tedious and error-prone, and keeps you from working on tasks that add real value.

Can Cloud Hosting Help?

To some extent, yes. However, cloud hosting typically requires one or more separate virtual machines for every site. VMs are large, expensive and still rely on significant hardware resources to scale effectively. The traditional approach to scaling also requires considerable manual intervention by systems administrators or DevOps. Modern Infrastructure-as-a-Service providers such as AWS and Rackspace make it easy to provision new VMs to handle additional workload but someone, somewhere needs to stitch those additional servers together. Deployment takes time – and that may be too late for your traffic spike.

Step One to Saving Your Sanity: Use Version Control

If you’re not using version control, it’s time to start. Git is a great choice, but any solution is better than none. Version control can be used to create a stable deployment process to improve your workflow. Ideally:

  • Team members will have access to their own, separate development environments which allow them to update or create new features on separate code branches.
  • Content – such as your WordPress pages, posts and images – is synchronized from the live environment to all development and staging systems. The team can then work against an accurate snapshot of reality, which is critical for a full understanding of the system.
  • Automated quality-assurance tests ensure new code is tested prior to deployment. It should be impossible for problematic features to reach the live server.
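The branch-per-feature part of this workflow can be sketched with a few Git commands (the repository, branch and file names below are illustrative):

```shell
# Each developer works on a separate feature branch, keeping the main
# branch deployable at all times.
git init demo-site
cd demo-site
git config user.email "dev@example.com"
git config user.name "Dev"

echo "v1" > index.php
git add index.php
git commit -m "Initial site"

# New work happens on its own branch:
git checkout -b feature/signup-form
echo "v2" > index.php
git commit -am "Add signup form"

# After automated QA passes, merge back into the deployable branch:
git checkout -
git merge feature/signup-form
```

Because each feature lives on its own branch until it passes QA, the deployable branch never contains half-finished work.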

Step Two: Consider a Website Management Platform

A what? Website Management Platforms are a new concept. They’re similar to cloud-based Platform-as-a-Service (PaaS) hosts such as Heroku or Cloud Foundry. However, a WMP is purpose-built for a specific application such as WordPress or Drupal, rather than a development environment such as PHP or Ruby.

The leader in this field is Pantheon, a company that implements hosting and scaling in a new way. Rather than rely on heavy VMs, sites are constructed on lightweight containers abstracted from the OS and hardware. Only the application is included – not the whole guest OS, PHP, MySQL or other dependencies. WordPress and Drupal applications can then be managed from Pantheon’s dashboard.

Pantheon has created an infrastructure named the “Runtime Matrix”. This executes your site’s code across hundreds of powerful servers which serve millions of containers. Intelligent routing, load balancing and advanced caching and security services are included as standard. The service also includes Pantheon Content Base which manages databases, files and version control more effectively.

Pantheon’s Website Management Platform has a number of advantages including:

1. Fast Provisioning

Containers are provisioned using software, which means they can be added or removed very quickly. Idle sites, such as development containers, are effectively deactivated until a new request is made. The largest websites in the world — like Google, Facebook and Twitter — are managed through software, not by manually adding and configuring new VMs, servers and services. Software-based provisioning removes human error and greatly increases the speed of new services coming online. A Website Management Platform quickly provisions all the required services, so sites can scale out quickly to handle peak loads with minimal human intervention.

2. High Availability

High availability means guaranteed uptime even when services fail. One compelling aspect of Website Management Platforms is the ability to redirect traffic and service requests to known working services for code execution, content requests, db…. The multi-tenant, high-availability values introduced by Gmail and Heroku are now finding their way into website management, where a dedicated team of platform engineers delivers services that most companies would not be able to build and manage on their own.

Continue reading %Boost Your WordPress and Drupal Performance with Pantheon%

How to Access Member Functions in Polymer Elements

Thu, 2015-05-21 21:00

Here’s the source code for my project.

It’s my first time using Polymer, and I’m certainly getting snagged in a few spots. Most recently, it was trying to access member functions of a Polymer object that I created. It took me forever to figure this out, so I wanted to share it with you in this tutorial.

Sidenote: you can also search for my more detailed write-up on Web Components here.

The Wrong Way

I have a Web Component which looks like this:

If I try to access it by its ID…

var temp = document.querySelector("#radial-button-template"); // returns

I cannot access any of the functions inside it. They return undefined. So if I tried this:

var temp = document.querySelector("#radial-button-template");
temp.getFirstElement // returns undefined

Why Is This Happening?

This is due to the Shadow DOM’s encapsulation. It is both a gift and a curse. In this case, I am accessing the element, and not the shadowRoot, which will expose the public methods attached to the Shadow DOM object.

In the next step, you'll see how I can access the member functions in my custom element, as well as how I can return nodes that lie even deeper in my web component.

Rob Dobson of Google’s Polymer team explains this well in this blog post. Eric Bidleman goes into even more detail in his advanced Shadow DOM article. I strongly suggest taking the time to read these over to better understand how this version of the DOM works.

Continue reading %How to Access Member Functions in Polymer Elements%

Visual Studio Community 2015: Adding Email and Contact Pages

Thu, 2015-05-21 19:00

This article was sponsored by Microsoft. Thank you for supporting the sponsors who make SitePoint possible.

Welcome back to our series of articles using Microsoft’s modern IDE: Visual Studio Community 2015 to quickly design and build an attractive, functional site for a client. If you missed the last installment, check it out here [LINK when article published].

Now that Andy has the website front page available, he can begin building out the site a little more. This will involve implementing an email signup form, as well as contact and about pages.

We’ll start with an email signup form then move into creating some additional pages. The email signup form will be front and center on the homepage. It will be placed on the right side of the jumbotron, where there is some empty space available.

For the email signup form, we’ll use a form from MailChimp. Andy is using his client’s MailChimp account and will use an existing list for the homepage. Everyone that signs up on the homepage will go into this list.

Our signup form will be designed to look like this:

Getting Code from MailChimp

Once logged into MailChimp, select the list you want people added to. Click Signup Forms. Click embedded forms. Classic style is fine. The client wants to capture first name and email address. MailChimp actually has these as the default so we can leave things as they are.

Your screen in MailChimp should look like the following:

Copy the HTML. This is what we’ll paste into the jumbotron. In the jumbotron under this line:

[code language="html"]
Learn more »
[/code]
Add the MailChimp form code. If you run the app, it should look like the following:

Obviously this isn’t what we want it to look like, but it’s a good starting point. From here, we’ll format the form using Bootstrap and get everything aligned properly.

Modifying Signup Form With Bootstrap

With the current formatting, we’ve lost our responsive design. The site title needs to sit on the left while the signup form goes to the right. They should also be on the same row. Bootstrap will help us get things back in order.

We can add a couple of columns. Surround the jumbotron text with a <div class="col-md-8"> and the MailChimp code with a <div class="col-md-4">. This formatting means the site title text will take up 2/3 of the jumbotron while the signup form takes up 1/3.

Your code should look like the following:
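As a rough reference, the two-column structure described above looks like this; it’s a sketch only, since your actual jumbotron content and the MailChimp embed code will differ:

```html
<div class="jumbotron">
  <div class="row">
    <div class="col-md-8">
      <!-- site title text goes here -->
    </div>
    <div class="col-md-4">
      <!-- MailChimp signup form code goes here -->
    </div>
  </div>
</div>
```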

Remember, Bootstrap uses a 12-column grid system. 8 + 4 = 12, and you can see from these numbers how we get 2/3 and 1/3.

If you run the site, you’ll see we have two columns and our responsive web design is back.

Next, we’ll begin polishing the signup form UI so it blends in better with the site.

UI Polishing

Rather than going through lots of little steps, it will be easier to display what the finished MailChimp modifications should look like. Then we can step through. Replace your current MailChimp code with the following:

[code language="html"] Enter your name and email for
your first FREE lesson! First Name Email [/code]

I’ve added a few lines of space in the code to better help break up the form for discussion.

There are a few custom classes that we’ll create, which include not-bold, transparent-background, and soft-border-radius. We define these classes in site.css.

Since most of the MailChimp code is the same as what we copied from MailChimp, let's discuss what’s going on with these custom classes.

not-bold is defined as follows:

[code language="css"] .not-bold { font-weight:normal; } [/code]

It simply removes bold lettering. This is used to de-emphasize the form field labels. Our call to action is bolded. If the form field labels are also bolded, the eye will struggle a little to figure out where to focus. Worst-case scenario: people simply give up and bypass our signup form.

The screenshot below shows the use of .not-bold

transparent-background adds semi-transparency to the form background and input fields, giving a little more depth to our design. It is defined as:

[code language="css"] .transparent-background { background-color: rgba(0, 0, 0, 0.25); } [/code]

rgba simply means red, green, blue and alpha. Alpha sets opacity. The lower this value, the more transparent. Values can range from 0 to 1.

soft-border-radius makes our form and input fields express a little elegant detail with rounded corners. This class is defined as:

[code language="css"] .soft-border-radius { border-radius: 10px; } [/code]

Finally, we have a full-width blue button. .max-width helps us here. Not only does the blue provide great contrast and bring the eye right to it, but the large size makes it irresistible for clicking. .max-width is defined as:

[code language="css"] .max-width { width:100%; } [/code]

Adding the above classes to site.css and pasting in the above form code should result in the same signup form as shown above.

Continue reading %Visual Studio Community 2015: Adding Email and Contact Pages%

Speed up Development Using the WordPress Plugin Boilerplate

Thu, 2015-05-21 18:00

A low barrier to entry into WordPress plugin development means that there is no one definitive way to build a plugin. A plugin can be as simple as a single file, like Hello Dolly, or as complex as needed to cater for various requirements and functionality. The WordPress plugin boilerplate aims to provide a standardized, high-quality foundation on which to build your next awesome plugin.

In this first part of the series, we'll take a deep look into the boilerplate, including how its files and folders are structured, as well as how its code is organised.

Continue reading %Speed up Development Using the WordPress Plugin Boilerplate%