emGee Software Solutions Custom Database Applications


Web Design

An update to referral source URLs for Google Images

Google Webmaster Central Blog - Tue, 07/17/2018 - 10:18
Every day, hundreds of millions of people use Google Images to visually discover and explore content on the web. Whether it be finding ideas for your next baking project, or visual instructions on how to fix a flat tire, exploring image results can sometimes be much more helpful than exploring text.
Updating the referral source

For webmasters, it hasn't always been easy to understand the role Google Images plays in driving site traffic. To address this, we will roll out a new referrer URL specific to Google Images over the next few months. The referrer URL is part of the HTTP header and indicates the last page the user was on when they clicked through to the destination webpage.
If you create software to track or analyze website traffic, we want you to be prepared for this change. Make sure that you are ingesting the new referer URL, and attribute the traffic to Google Images. The new referer URL is: https://images.google.com.
If you use Google Analytics to track site data, the new referral URL will be automatically ingested and traffic will be attributed to Google Images appropriately. Just to be clear, this change will not affect Search Console. Webmasters will continue to receive an aggregate list of top search queries that drive traffic to their site.
How this affects country-specific queries

The new referer URL has the same country code top-level domain (ccTLD) as the URL used for searching on Google Images. In practice, this means that most visitors worldwide come from images.google.com. That's because last year, we made google.com the default choice for searchers worldwide. However, some users may still choose to go directly to a country-specific service, such as google.co.uk for the UK. For this use case, the referer uses that country's TLD (for example, images.google.co.uk).
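For example, a traffic-analysis script might classify the new referrers along these lines (a minimal sketch only; the function name is hypothetical, and real analytics software should handle many more edge cases):

```php
<?php
// Hypothetical helper: returns true if a referrer URL points at
// Google Images, including country-specific hosts such as
// images.google.co.uk or images.google.com.au.
function is_google_images_referrer(string $referrer): bool
{
    $host = parse_url($referrer, PHP_URL_HOST);
    if (!is_string($host)) {
        return false;
    }
    return (bool) preg_match(
        '/^images\.google\.(com|com\.[a-z]{2}|co\.[a-z]{2}|[a-z]{2})$/i',
        $host
    );
}
```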
We hope this change will foster a healthy visual content ecosystem. If you're interested in learning how to optimize your pages for Google Images, please refer to the Google Image Publishing Guidelines. If you have questions, feedback or suggestions, please let us know through the Webmaster Tools Help Forum.
Posted by Ashutosh Agarwal, Product Manager, Google Images
Categories: Web Design

Set Up Routing in PHP Applications Using the Symfony Routing Component

Tuts+ Code - Web Development - Fri, 07/13/2018 - 07:00

Today, we'll go through the Symfony Routing component, which allows you to set up routing in your PHP applications.

What Is the Symfony Routing Component?

The Symfony Routing Component is a very popular routing component that has been adopted by several frameworks, and it provides a lot of flexibility should you wish to set up routes in your PHP application.

If you've built a custom PHP application and are looking for a feature-rich routing library, the Symfony Routing Component is more than worth a look. It also allows you to define routes for your application in the YAML format.

Starting with installation and configuration, we'll go through real-world examples to demonstrate a variety of options the component has for route configuration. In this article, you'll learn:

  • installation and configuration
  • how to set up basic routes
  • how to load routes from the YAML file
  • how to use the all-in-one router
Installation and Configuration

In this section, we're going to install the libraries that are required in order to set up routing in your PHP applications. I assume that you've installed Composer in your system as we'll need it to install the necessary libraries that are available on Packagist.

Once you've installed Composer, go ahead and install the core Routing component using the following command.

$ composer require symfony/routing

Although the Routing component itself is sufficient to provide comprehensive routing features in your application, we'll go ahead and install a few other components as well to make our life easier and enrich the existing core routing functionality.

To start with, we'll go ahead and install the HttpFoundation component, which provides an object-oriented wrapper for PHP global variables and response-related functions. It makes sure that you don't need to access global variables like $_GET, $_POST and the like directly.

$ composer require symfony/http-foundation

Next, if you want to define your application routes in the YAML file instead of the PHP code, it's the YAML component that comes to the rescue as it helps you to convert YAML strings to PHP arrays and vice versa.

$ composer require symfony/yaml

Finally, we'll install the Config component, which provides several utility classes to initialize and deal with configuration values defined in the different types of file like YAML, INI, XML, etc. In our case, we'll use it to load routes from the YAML file.

$ composer require symfony/config

So that's the installation part, but how are you supposed to use it? In fact, it's just a matter of including the autoload.php file created by Composer in your application, as shown in the following snippet.

<?php
require_once './vendor/autoload.php';

// application code
?>

Set Up Basic Routes

In the previous section, we went through the installation of the necessary routing components. Now, you're ready to set up routing in your PHP application right away.

Let's go ahead and create the basic_routes.php file with the following contents.

<?php
require_once './vendor/autoload.php';

use Symfony\Component\Routing\Matcher\UrlMatcher;
use Symfony\Component\Routing\RequestContext;
use Symfony\Component\Routing\RouteCollection;
use Symfony\Component\Routing\Route;
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\Routing\Generator\UrlGenerator;
use Symfony\Component\Routing\Exception\ResourceNotFoundException;

try {
    // Init basic route
    $foo_route = new Route(
        '/foo',
        array('controller' => 'FooController')
    );

    // Init route with dynamic placeholders
    $foo_placeholder_route = new Route(
        '/foo/{id}',
        array('controller' => 'FooController', 'method' => 'load'),
        array('id' => '[0-9]+')
    );

    // Add Route object(s) to RouteCollection object
    $routes = new RouteCollection();
    $routes->add('foo_route', $foo_route);
    $routes->add('foo_placeholder_route', $foo_placeholder_route);

    // Init RequestContext object
    $context = new RequestContext();
    $context->fromRequest(Request::createFromGlobals());

    // Init UrlMatcher object
    $matcher = new UrlMatcher($routes, $context);

    // Find the current route
    $parameters = $matcher->match($context->getPathInfo());

    // How to generate a SEO URL
    $generator = new UrlGenerator($routes, $context);
    $url = $generator->generate('foo_placeholder_route', array(
        'id' => 123,
    ));

    echo '<pre>';
    print_r($parameters);
    echo 'Generated URL: ' . $url;
    exit;
} catch (ResourceNotFoundException $e) {
    echo $e->getMessage();
}

Setting up routing using the Symfony Routing component usually goes through a series of steps as listed below.

  • Initialize the Route object for each of your application routes.
  • Add all Route objects to the RouteCollection object.
  • Initialize the RequestContext object which holds the current request context information.
  • Initialize the UrlMatcher object by passing the RouteCollection object and the RequestContext object.
Initialize the Route Object for Different Routes

Let's go ahead and define a pretty basic foo route.

$foo_route = new Route(
    '/foo',
    array('controller' => 'FooController')
);

The first argument of the Route constructor is the URI path, and the second argument is the array of custom attributes that you want to return when this particular route is matched. Typically, it would be a combination of the controller and method that you would like to call when this route is requested.

Next, let's have a look at the parameterized route.

$foo_placeholder_route = new Route(
    '/foo/{id}',
    array('controller' => 'FooController', 'method' => 'load'),
    array('id' => '[0-9]+')
);

The above route can match URIs like foo/1, foo/123, and similar. Please note that we've restricted the {id} parameter to numeric values only, so it won't match a URI like foo/bar, where the {id} segment is non-numeric.
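Under the hood, the component compiles each path pattern and its requirements into a regular expression. Here is a rough, framework-free sketch of that behavior (the function name is hypothetical and only illustrates what the compiled route does, not the component's actual internals):

```php
<?php
// Hypothetical sketch: how a pattern like '/foo/{id}' with the
// requirement id => '[0-9]+' behaves once compiled to a regex.
function match_foo_placeholder(string $uri): ?array
{
    if (preg_match('#^/foo/(?P<id>[0-9]+)$#', $uri, $matches)) {
        return array(
            'controller' => 'FooController',
            'method'     => 'load',
            'id'         => $matches['id'],
        );
    }
    return null; // no match, e.g. '/foo/bar'
}
```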

Add All Route Objects to the RouteCollection Object

The next step is to add route objects that we've initialized in the previous section to the RouteCollection object.

$routes = new RouteCollection();
$routes->add('foo_route', $foo_route);
$routes->add('foo_placeholder_route', $foo_placeholder_route);

As you can see, it's pretty straightforward as you just need to use the add method of the RouteCollection object to add route objects. The first argument of the add method is the name of the route, and the second argument is the route object itself.

Initialize the RequestContext Object

Next, we need to initialize the RequestContext object, which holds the current request context information. We'll need this object when we initialize the UrlMatcher object as we'll go through it in a moment.

$context = new RequestContext();
$context->fromRequest(Request::createFromGlobals());

Initialize the UrlMatcher Object

Finally, we need to initialize the UrlMatcher object along with routes and context information.

// Init UrlMatcher object
$matcher = new UrlMatcher($routes, $context);

Now we have everything we need to match our routes against.

How to Match Routes

It's the match method of the UrlMatcher object which allows you to match any route against a set of predefined routes.

The match method takes the URI as its first argument and tries to match it against predefined routes. If the route is found, it returns custom attributes associated with that route. On the other hand, it throws the ResourceNotFoundException exception if there's no route associated with the current URI.

$parameters = $matcher->match($context->getPathInfo());

In our case, we've provided the current URI by fetching it from the $context object. So, if you're accessing the http://your-domain/basic_routes.php/foo URL, $context->getPathInfo() returns /foo, and since we've already defined a route for the /foo URI, it should return the following.

Array
(
    [controller] => FooController
    [_route] => foo_route
)

Now, let's go ahead and test the parameterized route by accessing the http://your-domain/basic_routes.php/foo/123 URL.

Array
(
    [controller] => FooController
    [method] => load
    [id] => 123
    [_route] => foo_placeholder_route
)

It worked: you can see that the id parameter is bound to the expected value, 123.

Next, let's try to access a non-existent route like http://your-domain/basic_routes.php/unknown-route, and you should see the following message.

No routes found for "/unknown-route".

So that's how you can find routes using the match method.

Apart from this, you can also use the Routing component to generate links in your application. Given the RouteCollection and RequestContext objects, the UrlGenerator allows you to build links for specific routes.

$generator = new UrlGenerator($routes, $context);
$url = $generator->generate('foo_placeholder_route', array(
    'id' => 123,
));

The first argument of the generate method is the route name, and the second argument is an array of parameters if the route is parameterized. The above code should generate the /basic_routes.php/foo/123 URL.

Load Routes From the YAML File

In the previous section, we built our custom routes using the Route and RouteCollection objects. In fact, the Routing component offers several ways to instantiate routes: you can choose from various loaders like YamlFileLoader, XmlFileLoader, and PhpFileLoader.

In this section, we'll go through the YamlFileLoader loader to see how to load routes from the YAML file.

The Routes YAML File

Go ahead and create the routes.yaml file with the following contents.

foo_route:
    path: /foo
    defaults: { controller: 'FooController::indexAction' }

foo_placeholder_route:
    path: /foo/{id}
    defaults: { controller: 'FooController::loadAction' }
    requirements:
        id: '[0-9]+'

An Example File

Next, go ahead and make the load_routes_from_yaml.php file with the following contents.

<?php
require_once './vendor/autoload.php';

use Symfony\Component\Routing\Matcher\UrlMatcher;
use Symfony\Component\Routing\RequestContext;
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\Routing\Generator\UrlGenerator;
use Symfony\Component\Config\FileLocator;
use Symfony\Component\Routing\Loader\YamlFileLoader;
use Symfony\Component\Routing\Exception\ResourceNotFoundException;

try {
    // Load routes from the yaml file
    $fileLocator = new FileLocator(array(__DIR__));
    $loader = new YamlFileLoader($fileLocator);
    $routes = $loader->load('routes.yaml');

    // Init RequestContext object
    $context = new RequestContext();
    $context->fromRequest(Request::createFromGlobals());

    // Init UrlMatcher object
    $matcher = new UrlMatcher($routes, $context);

    // Find the current route
    $parameters = $matcher->match($context->getPathInfo());

    // How to generate a SEO URL
    $generator = new UrlGenerator($routes, $context);
    $url = $generator->generate('foo_placeholder_route', array(
        'id' => 123,
    ));

    echo '<pre>';
    print_r($parameters);
    echo 'Generated URL: ' . $url;
    exit;
} catch (ResourceNotFoundException $e) {
    echo $e->getMessage();
}

The only thing that's different in this case is the way we initialize routes!

$fileLocator = new FileLocator(array(__DIR__));
$loader = new YamlFileLoader($fileLocator);
$routes = $loader->load('routes.yaml');

We've used the YamlFileLoader loader to load routes from the routes.yaml file instead of initializing them directly in PHP. Apart from that, everything is the same and should produce the same results as the basic_routes.php file.

The All-in-One Router

Lastly in this section, we'll go through the Router class, which allows you to set up routing quickly with fewer lines of code.

Go ahead and make the all_in_one_router.php file with the following contents.

<?php
require_once './vendor/autoload.php';

use Symfony\Component\Routing\RequestContext;
use Symfony\Component\Routing\Router;
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\Routing\Generator\UrlGenerator;
use Symfony\Component\Config\FileLocator;
use Symfony\Component\Routing\Loader\YamlFileLoader;
use Symfony\Component\Routing\Exception\ResourceNotFoundException;

try {
    $fileLocator = new FileLocator(array(__DIR__));

    $requestContext = new RequestContext();
    $requestContext->fromRequest(Request::createFromGlobals());

    $router = new Router(
        new YamlFileLoader($fileLocator),
        'routes.yaml',
        array('cache_dir' => __DIR__.'/cache'),
        $requestContext
    );

    // Find the current route
    $parameters = $router->match($requestContext->getPathInfo());

    // How to generate a SEO URL
    $routes = $router->getRouteCollection();
    $generator = new UrlGenerator($routes, $requestContext);
    $url = $generator->generate('foo_placeholder_route', array(
        'id' => 123,
    ));

    echo '<pre>';
    print_r($parameters);
    echo 'Generated URL: ' . $url;
    exit;
} catch (ResourceNotFoundException $e) {
    echo $e->getMessage();
}

Everything is pretty much the same, except that we've instantiated the Router object along with the necessary dependencies.

$router = new Router(
    new YamlFileLoader($fileLocator),
    'routes.yaml',
    array('cache_dir' => __DIR__.'/cache'),
    $requestContext
);

With that in place, you can straight away use the match method of the Router object for route mapping.

$parameters = $router->match($requestContext->getPathInfo());

Also, you will need to use the getRouteCollection method of the Router object to fetch routes.

$routes = $router->getRouteCollection();

Conclusion

Today, we explored the Symfony Routing component, which makes implementing routing in PHP applications a breeze. Along the way, we created a handful of examples to demonstrate various aspects of the Routing component.

Go ahead and explore the other options available in the Routing component. I would love to hear your thoughts!

I hope that you've enjoyed this article, and feel free to post your thoughts using the feed below!

Categories: Web Design

How to Start a Jekyll Blog on GitHub Pages for Free

Static website generators are increasingly popular these days. They make it possible to run a website without maintaining a database and a server. You also don’t have to worry...

The post How to Start a Jekyll Blog on GitHub Pages for Free appeared first on Onextrapixel.

Categories: Web Design

Using page speed in mobile search ranking

Google Webmaster Central Blog - Mon, 07/09/2018 - 03:09

Update July 9, 2018: The Speed Update is now rolling out for all users.

People want to be able to find answers to their questions as fast as possible — studies show that people really care about the speed of a page. Although speed has been used in ranking for some time, that signal was focused on desktop searches. Today we’re announcing that starting in July 2018, page speed will be a ranking factor for mobile searches.

The “Speed Update,” as we’re calling it, will only affect pages that deliver the slowest experience to users and will only affect a small percentage of queries. It applies the same standard to all pages, regardless of the technology used to build the page. The intent of the search query is still a very strong signal, so a slow page may still rank highly if it has great, relevant content.

We encourage developers to think broadly about how performance affects a user’s experience of their page and to consider a variety of user experience metrics. Although there is no tool that directly indicates whether a page is affected by this new ranking factor, here are some resources that can be used to evaluate a page’s performance.

  • Chrome User Experience Report, a public dataset of key user experience metrics for popular destinations on the web, as experienced by Chrome users under real-world conditions
  • Lighthouse, an automated tool and a part of Chrome Developer Tools for auditing the quality (performance, accessibility, and more) of web pages
  • PageSpeed Insights, a tool that indicates how well a page performs on the Chrome UX Report and suggests performance optimizations

As always, if you have any questions or feedback, please visit our webmaster forums.

Posted by Zhiheng Wang and Doantam Phan
Categories: Web Design

12 Best Visual Studio Code Extensions for Web Developers

Visual Studio Code is one of the most popular source code editors for web developers. It was released in 2015 by Microsoft and offers many awesome features you can...

The post 12 Best Visual Studio Code Extensions for Web Developers appeared first on Onextrapixel.

Categories: Web Design

I Used The Web For A Day With Just A Keyboard

Smashing Magazine - Wed, 07/04/2018 - 04:30
By Chris Ashton. Published 2018-07-04, updated 2018-07-11.

This article is part of a series in which I attempt to use the web under various constraints, representing a given demographic of user. I hope to raise the profile of difficulties faced by real people, which are avoidable if we design and develop in a way that is sympathetic to their needs. Last time, I used the web for a day without JavaScript. Today, I forced myself to navigate the web using just my keyboard.

Who Uses The Keyboard To Navigate?

Broadly, there are three types of keyboard users:

  • Mobility-impaired users who struggle to use a mouse,
  • Vision-impaired users who are unable to see clickable elements in the page,
  • Power users who are able to use a mouse but find it quicker to use a keyboard.
How Many Users Are We Talking?

I’ve trawled the web for statistics on keyboard usage, and I couldn’t find a thing. Seriously. Not one study.

Most keyboard accessibility guidance sites simply take for granted that “many users” rely on keyboards to get around. Anyone trying to get an approximate number is usually preachily dismissed with “stats don’t matter — your site should be accessible, period.”

Yes, it is true that the scale of non-mouse usage is a moot point. If you can make a change that empowers even one user, it is a change worth making. But there are plenty of stats available around things like color blindness, browser usage, connection speeds and so on — why the caginess around keyboard statistics? If the numbers are as prevalent as sites seem to suggest, surely having them would enable a stronger business case and make defending keyboard accessibility to your stakeholders easier.


The closest thing to a number I can find is an article on PowerMapper, which suggests that 7% of working-age adults in the US, UK, and Canada have “severe dexterity difficulties.” This would make them “unlikely to use a mouse, and rely on the keyboard instead.”

Users with severe visual disabilities use software called a screen reader, which reads out content on the screen as synthesized speech. Like sighted users, non-sighted users want to be able to scan pages for interesting information, so the screen reader has keyboard shortcuts for navigating via headings and links, and relies on keyboard-focusable elements for interaction.

“People who are blind need full keyboard access. Period.”

— David Macdonald, co-editor of Using WAI ARIA in HTML5

These same users also have screen readers on their mobile devices, where they use swipe gestures instead of keyboard presses to ‘tab around’ content. So whilst they’re not literally using a keyboard, they do require the site to be keyboard-accessible as the screen reader technology hooks into the same tab ordering and event listeners as if they were using a keyboard. It’s worth noting that only about two-thirds to three-quarters of screen reader users are blind, meaning the rest might use a combination of screen-reader and magnification techniques.

2.3% of American people (of all ages) have a visual disability, not all of which would necessarily warrant the use of a screen reader. In 2016, Addy Osmani estimated actual screen reader usage to be around 1 to 2%. If we factor these users in with our mobility-impaired users and our power users, keyboard usage adds up to a sizeable percentage of the global audience. Therefore, caring about keyboard accessibility is not just doing the right thing morally (and legally — many countries require websites to be accessible by law), but it also makes good business sense.

With all of that in mind, what is the state of the web today? Time to find out!

I placed coasters over my touchpad to avoid the temptation of using it during this keyboard-only experiment. (Large preview)

The Experiment

What does everyone do when they have a day’s worth of intimidating work ahead of them? Procrastinate! I headed over to youtube.com. I had a specific video in mind and was grateful to find I wouldn’t need to tab into the main search box, as it is focussed on page load by default.

The autofocus Attribute

YouTube homepage with search bar already in focus (Large preview)

I assumed this would be focussed with JavaScript on window load, but it’s actually handled by the browser with an autofocus attribute on the input element.

As a sighted keyboard user, I found this extremely useful. As a blind screen reader user, I’m not sure whether I’d like it or not. The consensus seems to be that judicious use of autofocus is OK, in cases where the sole purpose of the page is to interact with a form (e.g. Google landing page, or a site contact form).

Default Focus Styles

I searched for some Whose Line Is It Anyway? goodness, and couldn’t help noticing that YouTube hadn’t defined any custom :focus styles, instead relying on the browser’s native styling to visually indicate which elements I was tabbing through.

Chrome focus styling — the famous blue outline. (Large preview)

I’ve always been under the impression that not all browsers define their own :focus state, so you have to define your own custom styling. I decided to put this to the test and see which browsers neglect to implement a default style, but to my surprise, I couldn’t find one. Every browser I tested had its own native implementation of :focus, although each varied in style.

Firefox focus styling — a dotted outline. (Large preview)

Safari focus styling — similar to Chrome but the blue halo is not as thick. (Large preview)

Opera focus styling is identical to Chrome, as they are both built on the Blink browser engine. (Large preview)

The focus styling in Edge is much the same as in Firefox. (Large preview)

IE11 underlines the link with a dotted line. (Large preview)

I even went quite far back in time:

IE7 focus styling (on XP) looks much the same as today’s Firefox implementation! (Large preview)

If you’d like to see more, there is a comprehensive screenshot collection of different elements in browser native states.

What this tells me is that you can reasonably assume every browser comes with some basic :focus styling. It is OK to let the browser do the work. What you’re risking is inconsistency: all browsers style elements subtly differently, and some are so subtle that they’re not particularly visually accessible.

It is possible to disable the default browser focus styles — by setting outline: none on your element — but you should do this only if you implement your own styled alternative. Heydon Pickering recommends this approach, citing the unclear or ugly defaults used by some browsers. If you do decide to roll out your own styles, be sure to use more than just colour as a modifier: Add an outline or an underline or some other visual indicator to support users with color-blindness.
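As a sketch of what such a replacement might look like (the selectors and colors here are illustrative, not a recommendation of any particular palette):

```css
/* Illustrative custom focus style: pairs a visible outline with an
   underline, so that color is never the only indicator. */
a:focus,
button:focus {
  outline: 3px solid #ffbf47;
  outline-offset: 2px;
  text-decoration: underline;
}
```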

Many sites suppress default focus styles but fail to provide custom styles, leading to inaccessible experiences. If your site is using Eric Meyer’s CSS reset, it could be inaccessible; this commonly used file resets the default :focus styles but instructs the developer to write their own, and many fail to spot the instructions.

Some people argue that it can be confusing to the user if you disable the browser defaults, as they lose the visual affordance of the focus state they’re used to and instead have to learn what your site’s focus state looks like. On the other hand, some argue that the browser defaults are ugly, or even confusing to the non-keyboard user.

Why confusing? Well, check out this animated carousel format on the BBC. There are two navigation buttons — next, and previous — and it’s useful to the keyboard user that the focus remains on them throughout the narrative. But to the mouse user, it can be quite confusing that the clicked button is still ‘focussed’ after moving the cursor away.

BBC animated carousel format (Large preview)

The :focus-visible CSS Selector

If you want the best of both worlds, you may want to explore the CSS4 :focus-visible pseudo-class, which will let you provide different focus styling depending on context. :focus-visible styling only targets elements that have been focussed with keyboard, not with mouse click. This is super cool, though is currently only natively supported in Firefox. It can be enabled in Chrome by turning on the ‘Experimental Web Platform Features’ flag.
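A sketch of how that might be used, with a plain :focus rule as a fallback for browsers that don't yet support the pseudo-class (selectors and colors are illustrative):

```css
/* Fallback: every focused button gets an outline. */
button:focus {
  outline: 3px solid #4d90fe;
}

/* Where :focus-visible is supported, suppress the ring for mouse
   clicks and keep it for keyboard focus. */
button:focus:not(:focus-visible) {
  outline: none;
}

button:focus-visible {
  outline: 3px solid #4d90fe;
}
```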

The button is green when I tab to it via keyboard, and red when I click on it. (Large preview)

YouTube Videos And Keyboard Accessibility

YouTube does a great job with its video player — every part of the player is keyboard navigable. I like how the volume controls slide out when you tab focus away from the mute icon, in contrast to sliding out when hovering over the mute icon.


What I didn’t like was that helpful labels, such as the ‘Mute’ text that appears when hovering over the mute icon, don’t get shown on focus.

Another area that lets YouTube down is that it suppresses some focus styling. Here was me trying to tab to the ‘Show more’ button.

I try to tab to the “Show more” button via the video author avatar, title and links in the description, but end up tabbing to the “Add comment” section by accident. (Large preview)

I accidentally tabbed right past the ‘Show more’ button because I couldn’t see any :focus styling applied, whether custom or native. I figured out that the native styling was being overridden with outline-width:

Unchecking the outline-width: 0 rule enabled the blue border native Chrome focus styling. (Large preview)

GitHub Keyboard Accessibility

OK, work time. Where better to work than at the home of code, github.com?

I noticed three things about GitHub: One great, one reasonable, and one bad.

First, the good.

‘Skip To Content’ Link

GitHub landing view… keep an eye on this corner (Large preview)

GitHub offers a Skip to content link, which skips over the main menu.

After tabbing once, a wild Skip to content link appears! (Large preview)

If you hit ENTER while focussed on the ‘Skip to content’ link, you skip all of the menu items at the top of the page and can start to tab within the main area of content, saving time when navigating. This is a common accessibility pattern that is super useful for both keyboard and screen reader users. Around 30% of screen reader users will use a skip link if you provide one.

Alternatively, some sites choose to place the main content first in the reading order, above the navigation. This approach has fallen out of fashion as it breaks the guideline of making your DOM content match the visual order (unless your navigation visually appears at the bottom). And whilst this approach means we don’t need a ‘Skip navigation’ link at all, we’d probably want a ‘Skip to navigation’ link in its place.

Tab To See Content

One feature I noticed working differently to the ‘non-keyboard’ version was the code breakdown indicator.

Using the mouse, you can click the colored bar underneath any repository to view a proportional breakdown of the different programming languages used in the repo. Using the keyboard, you can’t actually navigate to the colored bar, but the languages come into view automatically when you tab past the end of the meta information.

I tab through to the code language breakdown, before showing how it’s done with a mouse. (Large preview)

This doesn’t really seem necessary — I would happily tab to the colored bar and hit ENTER on that — but this different behavior doesn’t cause any harm either.

Invisible Links

One problematic thing I came across was that there was an “invisible” link after tabbing past my profile picture at the top right. My tab order would tab to the picture, then to this invisible link, and then to the ‘Watch’ button on the repo (see gif below). I had no idea what the invisible link did, so when I recognized I was on it, I hit ENTER and was promptly logged out!

Beware of clicking invisible links. (Large preview)

On closer inspection, it looks like I’ve navigated to a “screenreader only” form (sr-only is a common screen reader class name) which has the ‘Sign out’ feature.


This sign-out link is in addition to the sign-out link on your profile dropdown menu:


I’m not sure that two separate HTML sign-out links are necessary, as a screen reader user should be able to trigger the drop-down and navigate to the main sign-out link. And if we wanted to keep the separate link, I would recommend applying a :focus styling to the screen-reader content so that sighted users don’t accidentally trigger logging themselves out!

Example screen-reader text focus styling. (Large preview)

How To Make A ‘Skip To Content’ Shortcut

So how do we recreate that ‘Skip to content’ shortcut? It’s pretty simple to implement, but can be deceptively tricky to get perfect — so here is what I consider to be the Holy Grail of skip links solutions.

‘Skip link’ is alternatively called ‘Skip navigation’, ‘Skip main navigation’, ‘Skip navigation links’, or ‘Skip to main content’. ‘Skip to main content’ is probably the clearest as it tells you where you are navigating to, rather than what you are skipping over.

The shortcut link should ideally appear straight after the opening <body> tag. It could appear later in the DOM, even after the footer, provided you have a tabindex="1" attribute to force it to become the first interactive element in the tab order. However, using tabindex with a number greater than zero is generally bad practice and will often result in a warning when using validation tools such as Lighthouse.

It’s not foolproof to rely on tabindex, as you may have more than one link with tabindex="1". In these cases, the link that appears first in the DOM gets focus first, not any of the later ones. Read more about using the tabindex attribute here, but remember that you’re always better off physically moving your link to the beginning of the DOM to be safe.

<a class="screen-reader-shortcut" href="#main-content">
  Skip to main content
</a>

The ‘Skip to main content’ link has limited use to sighted users, who can already skip the navigation by using their eyes. So, whilst some sites keep the skip link visible at all times, the convention nowadays is to keep the link hidden until you tab into it, at which point it is in focus and gains the styling applied by the :focus pseudo selector.

.screen-reader-shortcut {
  position: absolute;
  top: -1000em;
}
.screen-reader-shortcut:focus {
  position: fixed;
  top: 0;
  left: 0;
  z-index: 999;
  /* ...and now any nice styling you want to apply... */
  padding: 1em;
  background-color: rgb(114, 105, 105);
  color: white;
  text-decoration: none;
}

So, what are we actually skipping to? What is #main-content? It can really be anything:

  1. Inline content
    i.e. the id of your h1 tag: <h1 id="main-content">.
  2. Container
    e.g. the id of the container around your main content such as <main id="main-content">.
  3. Sibling anchor
    You can link to a named tag just above your main content, e.g. <a name="main-content"></a>. This approach is usually described in older tutorials — I wouldn’t recommend it these days.

For maximum compatibility across all screen readers, I’d recommend linking to the h1 tag. This is to ensure that the content gets read out as soon as you’ve used the skip link. Linking to containers can lead to funny behavior, e.g. the screen reader starting to read out all the content inside the container.

Your #main-content should also have a tabindex of -1, to ensure that it is programmatically focussable. Some screen readers may not obey the skip link otherwise.

<h1 id="main-content" tabindex="-1">This is the title of the page</h1>

One last consideration: legacy browser support. If you have enough users on IE9 or below, you may need to apply a small JavaScript fix to your skip links to ensure that the focus does actually shift as expected and your users successfully skip your navigation.
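A minimal sketch of such a fix, assuming the #main-content target from the snippets above. The focusSkipTarget helper is hypothetical; it simply forces keyboard focus onto the skip link's target, because some older browsers scroll to the anchor without moving focus:

```javascript
// Hypothetical helper for legacy browsers: given the skip link's target
// element, make it programmatically focusable and move keyboard focus to it.
function focusSkipTarget(target) {
  if (!target) return false;
  // tabindex="-1" makes the element focusable via script without adding
  // it to the natural tab order.
  target.setAttribute('tabindex', '-1');
  target.focus();
  return true;
}

// Browser wiring (sketch; assumes the markup from the snippets above):
// document.querySelector('.screen-reader-shortcut')
//   .addEventListener('click', function () {
//     focusSkipTarget(document.getElementById('main-content'));
//   });
```

Modern browsers move focus on anchor navigation by themselves, so this is only needed if your analytics show meaningful legacy traffic.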

Why Are We Reinventing The Wheel?

It seems crazy that as web developers we have to implement this ‘skip navigation’ hack on all of our sites as a rule. You would think we could let the standards do the work.

Since HTML5, we’ve had semantic elements such as <main>, <nav> and <header>. Prior to that, we had ARIA landmarks such as role="main", role="navigation" and role="banner" respectively. In the current landscape of the web, best practice dictates that you need both, i.e. <main role="main">, which is a horrid violation of the DRY principle, but there we go.

With all this semantic richness, you’d hope that browsers would start natively supporting navigation via these landmark areas, for example by exposing a keyboard shortcut for users to tab straight into the <main> section of a web page. No such luck — there is no native support at the moment. Your best bet is to use the Landmark Navigation via Keyboard extension for Chrome, Opera or Firefox.

Screen reader users, however, can start navigating directly to these landmark regions. For example, on VoiceOver on Mac, you can hit CTRL + ALT + U to bring up the Landmarks Menu and go to the ‘main’ landmark, which is a quick and consistent shortcut to get to the main content. Of course, this relies on sites marking up their documents correctly.

Here is a good starting point for your site if you’d like it to be navigable via landmark regions:

<body>
  <header role="banner">
    <!-- Logo and things can go here -->
    <nav role="navigation">
      <!-- Site navigation links go here -->
    </nav>
  </header>
  <main role="main">
    <!-- Main content lives here - including our h1 -->
  </main>
  <footer role="contentinfo">
    <!-- Copyright statement, etc -->
  </footer>
</body>

All this markup is thirsty work. Time for a coffee.

Pact Coffee

I remember seeing a flyer for pactcoffee.com… let’s go and take a look!

Cookie banner (Large preview)

The ‘Cookie policy’ banner is one of the first things you notice here, and dismissing it is almost an instinctive reflex for the sighted mouse user. Some screen reader users may not care about it (if you’re blind, you wouldn’t know it’s there until you reach it), but as a sighted user, you see it, you want to kill it, and in the case of this site, you need to tab past ALL OF THE OTHER LINKS before you can dismiss it.

I used the ChromeLens accessibility extension to trace the tab order of the page:

I have to tab through every single link in the page before I can dismiss the cookie banner. (Large preview)

This can be fixed by either moving the notice to the top of the document (it can still be anchored to the bottom visually with CSS), or by adding a tabindex="1" to the OK button. I would suggest applying this fix to any content where the expectation is that the user will want to dismiss it.
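For the first fix, the banner markup can sit first in the <body> while CSS pins it to the bottom of the viewport. A sketch, using a hypothetical class name:

```css
.cookie-banner {
  /* First element in the DOM, so its OK button is reached on an early
     TAB press, but visually anchored to the bottom of the viewport. */
  position: fixed;
  bottom: 0;
  left: 0;
  right: 0;
  z-index: 999;
}
```

DOM order drives the tab order, so this gets the dismiss button in front of keyboard users without resorting to positive tabindex values.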

More Invisible Links

Like on GitHub, I found myself tabbing to an off-screen element whose purpose wasn’t clear. It turned out to be a ‘See less…’ toggle that sits behind the ‘See more…’ card.

Tabbing from ‘See more’, to a hidden area, to another ‘See more’ button. What’s that mystery hidden area? Oh, it’s the ‘See less’ button “on the other side”. (Large preview)

This is because the ‘hidden’ area isn’t really hidden, it’s just rotated 180 degrees, using:

transform: rotateY(180deg);

…which means the ‘See less…’ button is still part of the tab order. This can be fixed by applying a display: none until the application is ready to trigger the rotation:

Applying display: none to the ‘See less…’ link takes it out of the tab order and makes for a less confusing keyboard experience. (Large preview)
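One way to express that fix in CSS, using hypothetical class names: keep the reverse face out of the document flow, and therefore out of the tab order, until the card is actually flipped:

```css
.card-back {
  /* Not rendered, so its 'See less...' link is not tabbable. */
  display: none;
}
.card.is-flipped .card-back {
  /* Rendered (and tabbable) only once the flip is triggered. */
  display: block;
  transform: rotateY(180deg);
}
```

An element that is merely rotated or visually obscured stays in the tab order; display: none (or visibility: hidden) is what actually removes it.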

Coffee ordered. It’s now time to carry on with my research.

IT World

I was doing some research for this article and came across a similar experiment to my own: Kevin Purdy browsed the web for seven days using only his keyboard. I find it ironic that I was unable to read his article under the same constraints!

The problem was a full-page cookie banner, requiring me to “Update Privacy Settings” or accept the default cookie settings. No matter how many times I tabbed, I could not focus in on the cookie banner and dismiss it.

Holding down TAB didn’t help. (Large preview)

I dug into the source code to find out what was going on. For a moment, I thought it might be our arch nemesis, the outline CSS property.

Large preview

Inspecting the “Update Privacy Setting” link, I can see an outline: 0 as I suspected. So perhaps I am focussing on the buttons, but there is no visual feedback when that happens?

I tried setting the state to :hover to see if I was missing out on any styling as a keyboard user:

Large preview

Sure enough, the link turned a nice, obvious orange colour on hover — something I never saw on focus:

Large preview

Hoorah! Cracked it! I never saw the :focus state because custom styling was only being applied on :hover. I must have skipped past the buttons without even noticing, right?

Wrong. Even when I hacked the CSS locally, I could not see any focus styling, meaning I wasn’t even getting as far as tabbing into the cookie modal. Then I realised… the link was missing an href attribute:

Large preview

That was the real culprit. The outline: 0 wasn’t the problem — the browser was never going to tab to the link because it wasn’t a valid link!

From the HTML 5.2 specification:

The destination of the link(s) is given by the href attribute, which must be present and must contain a valid non-empty URL potentially surrounded by spaces. If the href attribute is absent, then the element does not define a link.

Giving the links an href attribute — even if it’s just # — would make them valid links and would add them to the tab order of the page.
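A quick way to audit a page for this problem is to collect every anchor without an href. A rough sketch, with a hypothetical findPlaceholderLinks helper that works on anything exposing getAttribute (in a browser you would pass it Array.from(document.querySelectorAll('a'))):

```javascript
// Hypothetical audit helper: return the anchors that will be excluded
// from the tab order because they have no (or an empty) href attribute.
function findPlaceholderLinks(anchors) {
  return anchors.filter(function (a) {
    var href = a.getAttribute('href');
    return href === null || href.trim() === '';
  });
}
```

Anything this returns is either a link that should be given a real destination, or behavior that should be moved onto a <button>.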

Funnily enough, later on that day, I was sent an article on PC World to read and I encountered exactly the same problem.

Large preview

It seems that both sites were using the same Consent Management Platform (CMP). I did a little digging and deduced that it was affecting a number of sites owned by the same company, and have since contacted them directly with a suggested fix.


Kinetico

My kitchen tap is leaking and I’ve been meaning to get it replaced. I saw an ad in the local paper for kinetico.co.uk, so thought I’d take a look.

It’s impossible to navigate to the nested menu items via a keyboard. (Large preview)

I couldn’t navigate to the ‘Kitchen Taps’ section, as the link was tucked away behind a ‘Salt & Cartridges’ parent link which only shows its child links on hover. It’s interesting that the site is forward-thinking enough to provide a ‘Skip to Content’ link (seen briefly in the gif above) but was unable to create an accessible menu!

Here is where the menu goes wrong: the submenu is only shown when the parent menu item is being hovered over.

Fixing it is easier said than done. In most cases, you can just “double up” your selector to apply to focus too:

li:hover .nav_sub_menu,
li:focus .nav_sub_menu { }

But this doesn’t work in this case because whilst the <li> element is hoverable, it isn’t focusable. It’s the link inside the <li> that is focusable. But the submenu isn’t inside the link, it’s next to it, so we need to apply the sibling selector to show the submenu when the link is in focus.

li:hover .nav_sub_menu,
a:focus + .nav_sub_menu { }

This tweak means we can see our submenu when we tab to the parent menu item on the keyboard. But what happens when you try to tab into the submenu?

We can never tab to the ‘Frozen food’ child link of ‘Browse by Type’. (Large preview)

When we tab from the parent menu item, the focus shifts to the first link in the child menu as expected. But this moves focus away from the parent menu link, meaning the submenu gets hidden and the child menu items are removed from the tab order again!

This is a problem that can be solved with :focus-within, which lets you apply styling to a parent element if it or any of its child elements has the focus. So, in this case, we have to triple up:

li:hover .nav_sub_menu,      /* hover over parent menu item, show child menu */
a:focus + .nav_sub_menu,     /* focus onto parent menu item, show child menu */
.nav_sub_menu:focus-within { /* focus onto child menu item, keep showing child menu */
}

Our menu is now fully keyboard-accessible through pure CSS. I love creative CSS solutions, but a word of warning here: quite a lot of “CSS-only” solutions in the wild fall down when it comes to keyboard navigation. Avoiding JavaScript doesn’t necessarily make a site more accessible.

We can now tab through all the submenu items. (Large preview)

In fact, a JS-driven menu might be a better shout in this case, as browser support for this solution is still quite poor. :focus-within can currently only be used in Chrome, Firefox, and Safari. Even in Chrome, I found it to be incompatible with the display: none logic used to show/hide the child menu; I had to hide my menu items by setting opacity: 0 instead.

OK, I’m done for the day. It’s now time to wind down with a bit of social media.


Facebook

Facebook does an incredible job here, providing a masterclass in keyboard accessibility.

On the very first TAB press, a hidden menu opens up, providing shortcuts to the most popular sections of the current page and links to other popular pages.

Facebook hidden menu exposing accessibility options (Large preview)

When you cycle through the page sections using the arrow keys, those sections are highlighted visually so that you can see where you would be tabbing to.

When I focus on the ‘Navigate Facebook’ option in the dropdown, the corresponding section is highlighted in blue. (Large preview)

The most useful feature is that Facebook provides an OPT + / (or ALT + /) shortcut to get back to the menu at any time, making use of the aria-keyshortcuts attribute.

<div class="a11y-help">
  Press opt + / to open this menu
</div>
<div aria-label="Navigation Assistant" aria-keyshortcuts="Alt+/" role="menubar">
  <a class="screen-reader-shortcut" tabindex="1" href="#main-content">
    Skip to main content
  </a>
</div>

Unlike the ‘skip to main content’ link, which is built on top of native anchoring technology and “just works”, the aria-keyshortcuts attribute requires the author to implement all the keyboard behavior, so you’re going to have to write some custom JavaScript if you want to use this.

Here is some JS which hides and shows the menubar area, which is a useful starting point:

const a11yArea = document.querySelector('*[role="menubar"]');
document.addEventListener('keydown', (e) => {
  if (e.altKey && e.code === 'Slash') {
    a11yArea.style.display = a11yArea.style.display === 'block' ? 'none' : 'block';
  }
});

Summary

This experiment has been a mixed bag of great keyboard experiences and poor ones. I have three main takeaways.

Keep It Stylish

By far the most common keyboard accessibility issue I’ve faced today is a lack of focus styling for tabbable elements. Suppressing native focus styles without defining any custom focus styles makes it extremely difficult, even impossible, to figure out where you are on the page. Removing the outline is such a common faux pas that there’s even a site dedicated to it.

Ensuring that native or custom focus styling is visible is the single most impactful thing you can do in the area of keyboard accessibility, and it’s often one of the easiest; a simple case of doubling up selectors on your existing :hover styling. If you only do one thing after reading this article, it should be to search for outline: 0 and outline: none in your CSS.
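If you'd rather automate that search, here is a rough sketch: a hypothetical findOutlineSuppression helper that string-scans stylesheet text. It's a heuristic, not a CSS parser, so expect it to miss unusual formatting:

```javascript
// Hypothetical audit helper: scan raw CSS text for declarations that
// suppress the focus outline (outline: 0 or outline: none).
function findOutlineSuppression(cssText) {
  var pattern = /outline\s*:\s*(0|none)\b/gi;
  var matches = [];
  var m;
  while ((m = pattern.exec(cssText)) !== null) {
    matches.push(m[0]);
  }
  return matches;
}
```

Each hit is a candidate for either deletion or a replacement :focus style, per the advice above.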

Semantics Are Key

How many times have you tried opening a link in a new tab, only for your current window to get redirected? It happens to me every now and again, and annoying as it is, I’m lucky that it’s one of the only usability issues I tend to face when I use the web. Such issues arise from misusing the platform.

Let’s look at this code here:

<span onclick="window.location = 'https://google.com'">Click here</span>

An able, sighted user would be able to click on the <span> and be redirected to Google. However, because this is a <span> and not a link or a button, it has no focusability by default, so a keyboard or screen reader user would have no way of interacting with it.

Keyboard-users are standards-reliant users, whereas the able, sighted demographic is privileged enough to be able to interact with the element despite its non-conformance.

Use the native features of the platform. Write good, clean HTML, and use validators such as https://validator.w3.org to catch things like missing href attributes on your anchors.
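For the <span> example above, the native equivalents are focusable and keyboard-operable for free (doSomething is a hypothetical handler):

```html
<!-- Navigation: use a real link -->
<a href="https://google.com">Click here</a>

<!-- Behavior other than navigation: use a real button -->
<button type="button" onclick="doSomething()">Click here</button>
```

Both elements are in the tab order, respond to ENTER (and SPACE, for the button), and are announced correctly by screen readers without any extra work.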

Content Is Key

You may be required to display cookie notices, subscription forms, adverts or adblock notices.

Do what you can to make these experiences unobtrusive. If you can’t make them unobtrusive, at least make them dismissible.

Users are there to see your content, not your banners, so put these dismissible elements first in your DOM so that they can be quickly dismissed, or fall back to using tabindex="1" if you can’t move them.

Finally, support your users in getting to your content as quickly as they can, by implementing the Holy Grail of ‘skip to main content’ links.

Stay tuned for the next article in the series, where I will be building upon some of these techniques when I use a screen reader for a day.

Categories: Web Design

CSS Grid Level 2: Here Comes Subgrid

Smashing Magazine - Tue, 07/03/2018 - 04:00
By Rachel Andrew

We are now over a year on from CSS Grid Layout landing in the majority of our browsers, and the CSS Working Group are already working on Level 2 of the specification. In this article, I’m going to explain what is currently part of the Working and Editor’s Draft of that spec. Note that everything here is subject to change, and none of it currently works in browsers. Take this as a peek into the process, I’m sure I’ll be writing more practical pieces as we start to see implementations take shape.

CSS Specification Levels

The CSS Grid features we can currently use in browsers are those from Level 1 of the CSS Grid specification. The various parts of CSS are broken up into modules; this modularisation happened when CSS moved on from CSS 2.1, which is why you sometimes hear people talking about CSS3. In reality, there is no CSS3. Instead, there were a set of modules which included all of the things that were already part of the CSS2.1 specification. Any CSS that existed in CSS2.1 became part of a Level 3 module, therefore, we have CSS Selectors Level 3, as selectors existed in CSS2.1.

New CSS features which were not part of CSS2.1, such as CSS Grid Layout, start out at Level 1. The CSS Grid Level 1 specification is essentially the first version of Grid. Once a specification Level gets to Candidate Recommendation status, major new features are not added. This means that browsers and other user agents can implement the spec and it can become a W3C Recommendation. If new features are to be designed, they will happen in a new Level of the specification. We are at this point with CSS Grid Layout. The Level 1 specification is at CR, and a Level 2 specification has been created in order for new features to be worked on. I would suggest looking at the Editor’s Draft if you want to follow along with specification discussions, as this will contain all of the latest edits.


What Will Level 2 Of CSS Grid Contain?

Ultimately, the level 2 specification will contain everything that is already in Level 1 plus some new features. If you take a look at the specification at the time of writing, there is a note explaining that all of Level 1 should be copied over once Level 2 reaches CR.

We can then expect to find some new features, and Level 2 of the Grid Specification is all about working out the subgrid feature of CSS Grid. This feature was dropped from the Level 1 specification in order to allow time to properly understand the use cases for subgrid, and give more time to work on it without holding up the rest of Level 1. In the rest of this article, I’ll be taking a look at the subgrid feature as it is currently detailed in the Editor’s Draft. We are at a very early stage with the feature, however, this is the perfect time to follow along, and to actually help shape how the specification is developed. My aim with writing this article is to explain some of the things being discussed, in order that you can understand and bring your input to discussions.

What Is A Subgrid?

When using CSS Grid Layout, you can already nest grids. In the example below, I have a parent grid with six column tracks and three-row tracks. I have positioned an item on this grid from column line 2 to line 6 and from row line 1 to 3. I have then made that item a grid container and defined column tracks.

.grid {
  display: grid;
  grid-template-columns: 1fr 2fr 1fr 2fr 1fr 2fr;
  grid-template-rows: auto auto auto;
}
.item {
  grid-column: 2 / 6;
  grid-row: 1 / 3;
  display: grid;
  grid-template-columns: 2fr 1fr 2fr 1fr;
}

The tracks of our nested grid have no relationship to tracks on the parent. This means that if we want to be able to line the tracks of our nested grid up with the lines on the outer grid, we have to do the work and use methods of calculating track sizes that ensure all tracks remain equal. In the example above, the tracks will look lined up, until an item with a larger size is added to one cell of the grid (making it use more space).

A small item means the tracks look as if they line up. (Large preview)
With a large item, we can see the tracks do not align. (Large preview)

For columns, it is often possible to get around the above scenario, essentially by restricting the flexibility of the grid. You could make your fr unit columns minmax(0,1fr) in order that they ignore item size when doing space distribution, or you could go back to using percentages. However, this removes some of the benefits of using grid and, when it comes to lining up rows in a nested grid, these methods will not work.
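The first of those workarounds can be sketched as follows, clamping each flexible column so that content size is ignored during space distribution (at the risk of overflow inside a cell):

```css
.grid {
  display: grid;
  /* minmax(0, 1fr) stops a large item from widening its column,
     so parent and nested tracks stay visually aligned */
  grid-template-columns: repeat(6, minmax(0, 1fr));
}
```

It keeps columns looking aligned, but as the article notes, no equivalent trick exists for lining up rows, which is exactly the gap subgrid fills.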

Let’s say we want a card layout in which the individual cards have a header, body, and footer. We also want the header and footer to line up across the cards.

.cards {
  display: grid;
  grid-template-columns: 1fr 1fr 1fr;
  grid-gap: 20px;
}
.card {
  display: grid;
  grid-template-rows: auto 1fr auto;
}

A set of cards (Large preview)

This works as long as the content is the same height in each header and footer. If we have extra content then the illusion is broken and the headers and footers no longer line up across the row.

We can’t get the headers to line up across the cards. (Large preview)

Creating A Subgrid

We can now take a look at how the subgrid feature is currently specified, and how it might solve the problems I’ve shown above.

Note: At the time of writing, none of the code below works in browsers. The aim here is to explain the syntax and concepts. The final specification is also likely to change from these details. For reference, I have written this article based on the Editor’s Draft available on June 23rd, 2018.

To create a subgrid, we will have a new value for grid-template-rows and grid-template-columns. These properties are normally used with a track listing, which defines the number and size of the row and column tracks. When creating a subgrid, however, you do not want to specify these tracks. Instead, you use the subgrid value to tell grid that this nested grid should use the number of tracks and track sizing that the grid area it covers spans.

In the below code, I have a parent grid with six column tracks and three row tracks. The nested grid is a grid item on that parent grid and spans from column line 2 to column line 6 and from row line 1 to row line 3. This is just like our initial example, however, we can now take a look at it using subgrid. The nested grid has a value of subgrid for both grid-template-columns and grid-template-rows. This means that the nested grid now has four column tracks and two row tracks, using the same sizing as the tracks defined on the parent.

.grid {
  display: grid;
  grid-template-columns: 1fr 2fr 1fr 2fr 1fr 2fr;
  grid-template-rows: auto auto auto;
}
.item {
  grid-column: 2 / 6;
  grid-row: 1 / 3;
  display: grid;
  grid-template-columns: subgrid;
  grid-template-rows: subgrid;
}

The nested grid is using the tracks defined on the parent. (Large preview)

This would mean that any change to the track sizing on the parent would be followed by the nested grid. A longer word making one of the tracks in the parent grid wider would result in that track in the nested grid also becoming wider, so things would continue to line up. This would also work the other way: the tracks of the parent grid could become wider based on the content in the subgrid.

One-Dimensional Subgrids

You can have a subgrid in one dimension and specify track sizing in another. In this next example, the subgrid is only specified on grid-template-columns. The grid-template-rows property has a track listing specified. The column tracks will therefore remain as the four tracks we saw above, but the row tracks can be defined separately to the tracks of the parent.

.grid {
  display: grid;
  grid-template-columns: 1fr 2fr 1fr 2fr 1fr 2fr;
  grid-template-rows: auto auto auto;
}
.item {
  grid-column: 2 / 6;
  grid-row: 1 / 3;
  display: grid;
  grid-template-columns: subgrid;
  grid-template-rows: 10em 5em 200px 200px;
}

This means that the rows of the subgrid will be nested inside the parent grid, just as when creating a nested grid today. As our nested grid spans two rows of the parent, one or both of these rows will need to expand to contain the content of the subgrid so as not to cause overflows.

You could also have a subgrid in one dimension and the other dimension use implicit tracks. In the below example, I have not specified any row tracks, and gave a value for grid-auto-rows. Rows will be created in the implicit grid at the size I specified and, as with the previous example, the parent will need to have room for these rows or to expand to contain them.

.grid {
  display: grid;
  grid-template-columns: 1fr 2fr 1fr 2fr 1fr 2fr;
  grid-template-rows: auto auto auto;
}
.item {
  grid-column: 2 / 6;
  grid-row: 1 / 3;
  display: grid;
  grid-template-columns: subgrid;
  grid-auto-rows: minmax(200px, auto);
}

Line Numbering And Subgrid

If we take a look at our first example again, the track sizing of our subgrid is dictated by the parent in both dimensions. The line numbers, however, act as normal in the subgrid. The first column line in the inline direction is line 1, and the line at the far end of the inline direction is line -1. You do not refer to the lines of the subgrid with the line number of the parent.

.grid {
  display: grid;
  grid-template-columns: 1fr 2fr 1fr 2fr 1fr 2fr;
  grid-template-rows: auto auto auto;
}
.item {
  grid-column: 2 / 6;
  grid-row: 1 / 3;
  display: grid;
  grid-template-columns: subgrid;
  grid-template-rows: subgrid;
}
.subitem {
  grid-column: 2 / 4;
  grid-row: 2;
}

The nested grid starts numbering at line 1. (Large preview)

Gaps And Subgrids

The subgrid will inherit any column or row gap set on the parent grid, however, this can be overruled by column and row gaps specified on the subgrid. If, for example the parent grid had a column-gap set to 20px, but the subgrid then had column-gap set to 0, the grid cells of the subgrid would gain 10px on each side in order to reduce the gap to 0, with the grid line essentially running down the middle of the gap.

We can now see how subgrid would help us to solve the second use case from the beginning of this article, that of having cards with headers and footers that line up across the cards.

.grid {
  display: grid;
  grid-template-columns: 1fr 1fr 1fr;
  grid-auto-rows: auto 1fr auto;
  grid-gap: 20px;
}
.card {
  grid-row: auto / span 3; /* use three rows of the parent grid */
  display: grid;
  grid-template-rows: subgrid;
  grid-gap: 0; /* set the gap to 0 on the subgrid so our cards don’t have gaps */
}

The card internals now line up. (Large preview)

Line Names And Subgrid

Any line names on your parent grid will be passed down to the subgrid. Therefore, if we named the lines on our parent grid, we could position the item according to those line names.

.grid {
  display: grid;
  grid-template-columns: [a] 1fr [b] 2fr [c] 1fr [d] 2fr [e] 1fr [f] 2fr [g];
  grid-template-rows: auto auto auto;
}
.item {
  grid-column: 2 / 6;
  grid-row: 1 / 3;
  display: grid;
  grid-template-columns: subgrid;
  grid-template-rows: 10em 5em 200px 200px;
}
.subitem {
  grid-column: c / e;
}

The line names on the parent apply to the subgrid. (Large preview)

You can also add line names to your subgrid; grid lines can have multiple line names, so these names are added to the names inherited from the parent. To specify line names, add a listing of these names after the subgrid value of grid-template-columns and grid-template-rows. If we take our above example and also add names to the subgrid lines, we will end up with two line names for any line in the subgrid.

.grid {
  display: grid;
  grid-template-columns: [a] 1fr [b] 2fr [c] 1fr [d] 2fr [e] 1fr [f] 2fr [g];
  grid-template-rows: auto auto auto;
}
.item {
  grid-column: 2 / 6;
  grid-row: 1 / 3;
  display: grid;
  grid-template-columns: subgrid [sub-a] [sub-b] [sub-c] [sub-d] [sub-e];
  grid-template-rows: 10em 5em 200px 200px;
}
.subitem {
  grid-column: c / e;
}

The line names specified on the subgrid are added to those of the parent. (Large preview)

Implicit Tracks And Subgrid

Once you have decided that a dimension of your grid is a subgrid, this removes the ability to have any additional implicit tracks in that dimension. If you add more items than can fit, the additional items will be placed in the last available track of the subgrid, in the same way that items are dealt with in overly large grids. A grid area created in the subgrid that spans more tracks than are available will have its last line set to the last line of the subgrid.

As explained above, however, you can have one dimension of your subgrid behave in exactly the same way as a normal nested grid, including implicit tracks.

Getting Involved With The Process

The work of the CSS Working Group happens in public, on GitHub, just like any other open-source project. This makes it somewhat easier to follow along with the work than it was when everything happened on a mailing list. You can take a look at the issues raised against Level 2 of the CSS Grid specification by searching for issues tagged as css-grid-2 in the CSS Working Group GitHub repository. If you can contribute thoughts or a use case to any of those issues, it would be welcomed.

There are other features that people have requested for CSS Grid Layout, and the fact that they haven’t been included in Level 2 does not mean they are not being considered. You can think of the levels like feature releases in a product: just because some feature isn’t part of the current sprint doesn’t mean it will never happen. Work on new web platform features tends to take a little longer than the average product release, but it is a similar process.

How Long Does This All Take?

Specification development and browser implementation is a somewhat circular, iterative process. It is not the case that the specification needs to be “finished” before we will see some browser implementations. The initial implementations are likely to be behind feature flags — just as the original grid specification was. Keep an eye out for these appearing, as once there is code to play with it makes thinking about these features far easier!

I hope this tour of what might be coming soon has been interesting. I’m excited that the subgrid feature is underway, as I have always believed it vital for a full grid layout system for the web. Watch this space for more news on how the feature is progressing and for emerging browser implementations.

Categories: Web Design

Building Mobile Apps With Capacitor And Vue.js

Smashing Magazine - Mon, 07/02/2018 - 05:00
By Ahmed Bouchefra

Recently, the Ionic team announced an open-source spiritual successor to Apache Cordova and Adobe PhoneGap, called Capacitor. Capacitor allows you to build an application with modern web technologies and run it everywhere, from web browsers to native mobile devices (Android and iOS) and even desktop platforms via Electron — the popular GitHub platform for building cross-platform desktop apps with Node.js and front-end web technologies.

Ionic — the most popular hybrid mobile framework — currently runs on top of Cordova, but in future versions, Capacitor will be the default option for Ionic apps. Capacitor also provides a compatibility layer that permits the use of existing Cordova plugins in Capacitor projects.

Aside from using Capacitor in Ionic applications, you can also use it without Ionic with your preferred front-end framework or UI library, such as Vue, React, Angular with Material, Bootstrap, etc.

In this tutorial, we’ll see how to use Capacitor and Vue to build a simple mobile application for Android. In fact, as mentioned, your application can also run as a progressive web application (PWA) or as a desktop application in major operating systems with just a few commands.

We’ll also be using some Ionic 4 UI components to style our demo mobile application.

Capacitor Features

Capacitor has many features that make it a good alternative to other solutions such as Cordova. Let’s see some of the features of Capacitor:

  • Open-source and free
    Capacitor is an open-source project, licensed under the permissive MIT license and maintained by Ionic and the community.
  • Cross-platform
    You can use Capacitor to build apps with one code base and target multiple platforms; supporting an additional platform takes only a few more command-line interface (CLI) commands.
  • Native access to platform SDKs
    Capacitor doesn’t get in the way when you need access to native SDKs.
  • Standard web and browser technologies
    An app built with Capacitor uses standard web APIs, so your application will also be cross-browser and will run well in all modern browsers that follow the standards.
  • Extensible
    You can access native features of the underlying platforms by adding plugins or, if you can’t find a plugin that fits your needs, by creating a custom plugin via a simple API.

To complete this tutorial, you’ll need a development machine with the following requirements:

  • You’ll need Node v8.6+ and npm v5.6+ installed on your machine. Just head to the official website and download the version for your operating system.
  • To build an iOS app, you’ll need a Mac with Xcode.
  • To build an Android app, you’ll need to install the Java 8 JDK and Android Studio with the Android SDK.
Creating A Vue Project

In this section, we’ll install the Vue CLI and generate a new Vue project. Then, we’ll add navigation to our application using the Vue router. Finally, we’ll build a simple UI using Ionic 4 components.

Installing The Vue CLI v3

Let’s start by installing the Vue CLI v3 from npm by running the following from the command line:

$ npm install -g @vue/cli

You might need to add sudo to install the package globally, depending on your npm configuration.

Generating a New Vue Project

After installing the Vue CLI, let’s use it to generate a new Vue project by running the following from the CLI:

$ vue create vuecapacitordemo

You can start a development server by navigating within the project’s root folder and running the following command:

$ cd vuecapacitordemo
$ npm run serve

Your front-end application will be running from http://localhost:8080/.

If you visit http://localhost:8080/ in your web browser, you should see the following page:

A Vue application

Adding Ionic 4

To be able to use Ionic 4 components in your application, you’ll need to use the core Ionic 4 package from npm.

So, go ahead and open the index.html file, which sits in the public folder of your Vue project, and add the following script tag to the file:

<script src='https://unpkg.com/@ionic/core@4.0.0-alpha.7/dist/ionic.js'></script>

This is the contents of public/index.html:

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width,initial-scale=1.0">
    <link rel="icon" href="<%= BASE_URL %>favicon.ico">
    <title>vuecapacitordemo</title>
  </head>
  <body>
    <noscript>
      <strong>We’re sorry but vuecapacitordemo doesn’t work properly without JavaScript enabled. Please enable it to continue.</strong>
    </noscript>
    <div id="app"></div>
    <!-- built files will be auto injected -->
    <script src='https://unpkg.com/@ionic/core@4.0.0-alpha.7/dist/ionic.js'></script>
  </body>
</html>

You can get the current version of the Ionic core package from npm.

Now, open src/App.vue, and add the following content within the template tag after deleting what’s in there:

<template>
  <ion-app>
    <router-view></router-view>
  </ion-app>
</template>

ion-app is an Ionic component. It should be the top-level component that wraps other components.

router-view is the Vue router outlet. A component matching a path will be rendered here by the Vue router.

After adding Ionic components to your Vue application, you are going to start getting warnings in the browser console similar to the following:

[Vue warn]: Unknown custom element: <ion-content> - did you register the component correctly? For recursive components, make sure to provide the "name" option.

found in

---> <HelloWorld> at src/components/HelloWorld.vue
       <App> at src/App.vue
         <Root>

This is because Ionic 4 components are actually web components, so you’ll need to tell Vue that components starting with the ion prefix are not Vue components. You can do that in the src/main.js file by adding the following line:

Vue.config.ignoredElements = [/^ion-/]

Those warnings should now be eliminated.
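The pattern passed to ignoredElements is an ordinary regular expression that Vue tests against each unknown tag name. As a quick standalone illustration (not part of the project code), this is what it matches:

```javascript
// The /^ion-/ pattern matches any tag name that starts with "ion-".
const ionicPattern = /^ion-/;

console.log(ionicPattern.test('ion-button'));  // true
console.log(ionicPattern.test('ion-content')); // true
console.log(ionicPattern.test('my-widget'));   // false
```

Any custom element outside the ion- namespace would still trigger the warning, so other web component libraries need their own entry in the ignoredElements array.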

Adding Vue Components

Let’s add two components. First, remove any file in the src/components folder (also, remove any import for the HelloWorld.vue component in App.vue), and add the Home.vue and About.vue files.

Open src/components/Home.vue and add the following template:

<template>
  <ion-app>
    <ion-header>
      <ion-toolbar color="primary">
        <ion-title>
          Vue Capacitor
        </ion-title>
      </ion-toolbar>
    </ion-header>
    <ion-content padding>
      The world is your oyster.
      <p>If you get lost, the <a href="https://ionicframework.com/docs">docs</a> will be your guide.</p>
    </ion-content>
  </ion-app>
</template>

Next, in the same file, add the following code:

<script>
export default {
  name: 'Home'
}
</script>

Now, open src/components/About.vue and add the following template:

<template>
  <ion-app>
    <ion-header>
      <ion-toolbar color="primary">
        <ion-title>
          Vue Capacitor | About
        </ion-title>
      </ion-toolbar>
    </ion-header>
    <ion-content padding>
      This is the About page.
    </ion-content>
  </ion-app>
</template>

Also, in the same file, add the following code:

<script>
export default {
  name: 'About'
}
</script>

Adding Navigation With Vue Router

Start by installing the Vue router, if it’s not already installed, by running the following command from the root folder of your project:

npm install --save vue-router

Next, in src/main.js, add the following imports:

import Router from 'vue-router'
import Home from './components/Home.vue'
import About from './components/About.vue'

This imports the Vue router and the “Home” and “About” components.

Next, install the router plugin into Vue:

Vue.use(Router)
Create a Router instance with an array of routes:

const router = new Router({
  routes: [
    {
      path: '/',
      name: 'Home',
      component: Home
    },
    {
      path: '/about',
      name: 'About',
      component: About
    }
  ]
})

Finally, tell Vue about the Router instance:

new Vue({
  router,
  render: h => h(App)
}).$mount('#app')

Now that we’ve set up routing, let’s add some buttons and methods to navigate between our two “Home” and “About” components.

Open src/components/Home.vue and add the following goToAbout() method:

...
export default {
  name: 'Home',
  methods: {
    goToAbout () {
      this.$router.push('about')
    }
  }
}

In the template block, add a button to trigger the goToAbout() method:

<ion-button @click="goToAbout" full>Go to About</ion-button>

Now we need to add a button to go back to home when we are in the “About” component.

Open src/components/About.vue and add the goBackHome() method:

<script>
export default {
  name: 'About',
  methods: {
    goBackHome () {
      this.$router.push('/')
    }
  }
}
</script>

And, in the template block, add a button to trigger the goBackHome() method:

<ion-button @click="goBackHome()" full>Go Back!</ion-button>

When running the application on a real mobile device or emulator, you will notice a scaling issue. To solve this, we simply need to add some meta tags that correctly set the viewport.

In public/index.html, add the following code to the head of the page:

<meta name="viewport" content="width=device-width, initial-scale=1.0, minimum-scale=1.0, maximum-scale=1.0, user-scalable=no">
<meta name="format-detection" content="telephone=no">
<meta name="msapplication-tap-highlight" content="no">

Adding Capacitor

You can use Capacitor in two ways:

  • Create a new Capacitor project from scratch.
  • Add Capacitor to an existing front-end project.

In this tutorial, we’ll take the second approach, because we created a Vue project first, and now we’ll add Capacitor to our Vue project.

Integrating Capacitor With Vue

Capacitor is designed to be dropped into any modern JavaScript application. To add Capacitor to your Vue web application, you’ll need to follow a few steps.

First, install the Capacitor CLI and core packages from npm. Make sure you are in your Vue project, and run the following command:

$ cd vuecapacitordemo
$ npm install --save @capacitor/core @capacitor/cli

Next, initialize Capacitor with your app’s information by running the following command:

$ npx cap init

We are using npx to run Capacitor commands. npx is a utility that comes with npm v5.2.0 and later, designed to make it easy to run CLI utilities and executables hosted in the npm registry. For example, it lets developers use locally installed executables without having to wrap them in npm run scripts.

The init command of Capacitor CLI will also add the default native platforms for Capacitor, such as Android and iOS.

You will also get prompted to enter information about your application, such as the name, the application’s ID (which will be mainly used as a package name for the Android application) and the directory of your application.

After you’ve inputted the required details, Capacitor will be added to your Vue project.

You can also provide the application’s details in the command line:

$ npx cap init vuecapacitordemo com.example.vuecapacitordemo

The application’s name is vuecapacitordemo, and its ID is com.example.vuecapacitordemo. The package name must be a valid Java package name.
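As a rough illustration of that constraint: a Java package name is a dot-separated list of identifiers, each beginning with a letter or underscore. The helper below is a simplified sketch of my own (it ignores Java’s reserved words and is not Capacitor’s actual validation):

```javascript
// Simplified check: dot-separated identifiers, each starting with a
// letter or underscore. Java reserved words are NOT checked here.
function looksLikeJavaPackageName(name) {
  return /^[A-Za-z_][A-Za-z0-9_]*(\.[A-Za-z_][A-Za-z0-9_]*)+$/.test(name);
}

console.log(looksLikeJavaPackageName('com.example.vuecapacitordemo')); // true
console.log(looksLikeJavaPackageName('com.123bad'));                   // false
```

If Android Studio later rejects your ID, a segment starting with a digit or clashing with a Java keyword is the usual culprit.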

You should see a message saying, “Your Capacitor project is ready to go!”

You might also notice that a file named capacitor.config.json has been added to the root folder of your Vue project.

As the CLI suggested when we initialized Capacitor in our Vue project, we can now add the native platforms we want to target. This will turn our web application into a native application for each platform we add.

But just before adding a platform, we need to tell Capacitor where to look for the built files — that is, the dist folder of our Vue project. This folder will be created when you run the build command of the Vue application for the first time (npm run build), and it is located in the root folder of our Vue project.

We can do that by changing webDir in capacitor.config.json, which is the configuration file for Capacitor. So, simply replace www with dist. Here is the content of capacitor.config.json:

{
  "appId": "com.example.vuecapacitordemo",
  "appName": "vuecapacitordemo",
  "bundledWebRuntime": false,
  "webDir": "dist"
}

Now, let’s create the dist folder and build our Vue project by running the following command:

$ npm run build

After that, we can add the Android platform using the following:

npx cap add android

If you look in your project, you’ll find that an android native project has been added.

That’s all we need to integrate Capacitor and target Android. If you would like to target iOS or Electron, simply run npx cap add ios or npx cap add electron, respectively.

Using Capacitor Plugins

Capacitor provides a runtime that enables developers to use the three pillars of the web — HTML, CSS and JavaScript — to build applications that run natively on the web and on major desktop and mobile platforms. But it also provides a set of plugins for accessing native device features, such as the camera, without having to write platform-specific low-level code; the plugin does that for you and exposes a normalized, high-level API.

Capacitor also provides an API that you can use to build custom plugins for the native features not covered by the set of official plugins provided by the Ionic team. You can learn how to create a plugin in the docs.

You can also find more details about available APIs and core plugins in the docs.

Example: Adding a Capacitor Plugin

Let’s see an example of using a Capacitor plugin in our application.

We’ll use the “Modals” plugin, which is used to show native modal windows for alerts, confirmations and input prompts, as well as action sheets.

Open src/components/Home.vue, and add the following import at the beginning of the script block:

import { Plugins } from '@capacitor/core';

This code imports the Plugins class from @capacitor/core.

Next, add the following method to show a dialog box:

…
methods: {
  …
  async showDialogAlert () {
    await Plugins.Modals.alert({
      title: 'Alert',
      message: 'This is an example alert box'
    });
  }
}

Finally, add a button in the template block to trigger this method:

<ion-button @click="showDialogAlert" full>Show Alert Box</ion-button>

Here is a screenshot of the dialog box:

A native modal box

You can find more details in the docs.

Building the App for Target Platforms

In order to build your project and generate a native binary for your target platform, you’ll need to follow a few steps. Let’s first see them in a nutshell:

  1. Generate a production build of your Vue application.
  2. Copy all web assets into the native project (Android, in our example) generated by Capacitor.
  3. Open your Android project in Android Studio (or Xcode for iOS), and use the native integrated development environment (IDE) to build and run your application on a real device (if attached) or an emulator.

So, run the following command to create a production build:

$ npm run build

Next, use the copy command of the Capacitor CLI to copy the web assets to the native project:

$ npx cap copy

Finally, you can open your native project (Android, in our case) in the native IDE (Android Studio, in our case) using the open command of the Capacitor CLI:

$ npx cap open android

Either Android Studio will be opened with your project, or the folder that contains the native project files will be opened.

Capacitor project opened in Android Studio

If that doesn’t open Android Studio, then simply open your IDE manually, go to “File” → “Open…”, then navigate to your project and open the android folder from within the IDE.

You can now use Android Studio to launch your app using an emulator or a real device.

Capacitor demo project

Conclusion

In this tutorial, we’ve used Ionic Capacitor with Vue and Ionic 4 web components to create a mobile Android application with web technologies. You can find the source code of the demo application we’ve created throughout this tutorial in the GitHub repository.

Categories: Web Design

8 Best Atom Packages for Web Developers

Atom is one of the most popular and feature-rich source code editors for web developers. Originally, Atom was GitHub’s internal tool. Later, they decided to open-source it for the...

The post 8 Best Atom Packages for Web Developers appeared first on Onextrapixel.

Categories: Web Design

Creating Pretty Popup Messages Using SweetAlert2

Tuts+ Code - Web Development - Sat, 06/30/2018 - 06:36

Every now and then, you will have to show an alert box to your users to let them know about an error or notification. The problem with the default alert boxes provided by browsers is that they are not very attractive. When you are creating a website with great color combinations and fancy animation to improve the browsing experience of your users, the unstyled alert boxes will seem out of place.

In this tutorial, you will learn about a library called SweetAlert2 that allows us to create all kinds of alert messages which can be customized to match the look and feel of our own website.

Display Simple Alert Messages

Before you can show all those sweet alert messages to your users, you will have to install the library and include it in your project. If you are using npm or bower, you can install it by running the following commands:

npm install sweetalert2
bower install sweetalert2

You can also get a CDN link for the latest version of the library and include it in your webpage using script tags:

<script src="https://cdn.jsdelivr.net/npm/sweetalert2@7.12.15/dist/sweetalert2.all.min.js"></script>

Besides the JavaScript file, you will also have to load a CSS file which is used to style all the alert boxes created by the library:

<link rel='stylesheet' href='https://cdn.jsdelivr.net/npm/sweetalert2@7.12.15/dist/sweetalert2.min.css'>

Once you have installed the library, creating a sweet alert is actually very easy. All you have to do is call the swal() function. Just make sure that the function is called after the DOM has loaded.

There are two ways to create a sweet alert using the swal() function. You can either pass the title, body text and icon value in three different arguments or you can pass a single argument as an object with different values as its key-value pairs. Passing everything in an object is useful when you want to specify values for multiple arguments.

When a single argument is passed and it is a string, the sweet alert will only show a title and an OK button. Users will be able to click anywhere outside the alert or on the OK button in order to dismiss it.

When two arguments are passed, the first one becomes the title and the second one becomes the text inside the alert. You can also show an icon in the alert box by passing a third argument. This can have any of the five predefined values: warning, error, success, info, and question. If you don't pass the third argument, no icon will be shown inside the alert message.

document.querySelector(".first").addEventListener('click', function(){
  swal("Our First Alert");
});

document.querySelector(".second").addEventListener('click', function(){
  swal("Our First Alert", "With some body text!");
});

document.querySelector(".third").addEventListener('click', function(){
  swal("Our First Alert", "With some body text and success icon!", "success");
});
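To make the relationship between the two calling styles concrete, here is a hypothetical helper of my own (not part of SweetAlert2) showing how the positional arguments map onto the single-object form:

```javascript
// Hypothetical helper: maps swal's positional arguments onto
// the equivalent single-object configuration.
function toSwalConfig(titleOrConfig, text, type) {
  if (typeof titleOrConfig === 'object') {
    return titleOrConfig; // already an options object
  }
  const config = { title: titleOrConfig };
  if (text !== undefined) config.text = text;
  if (type !== undefined) config.type = type; // warning|error|success|info|question
  return config;
}

console.log(toSwalConfig('Our First Alert'));
// { title: 'Our First Alert' }
console.log(toSwalConfig('Our First Alert', 'With some body text!', 'success'));
// { title: 'Our First Alert', text: 'With some body text!', type: 'success' }
```

In other words, swal("Title", "Text", "success") behaves like swal({ title: "Title", text: "Text", type: "success" }); the object form simply becomes necessary once you need options beyond those three.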

Configuration Options to Customize Alerts

If you simply want to show some basic information inside an alert box, the previous example will do just fine. However, the library can actually do a lot more than just simply show users some text inside an alert message. You can change every aspect of these alert messages to suit your own needs.

We have already covered the title, the text, and the icons inside a sweet alert message. There is also an option to change the buttons inside it and control their behavior. By default, an alert will only have a single confirm button with text that says "OK". You can change the text inside the confirm button by setting the value of the confirmButtonText property. If you also want to show a cancel button in your alert messages, all you have to do is set the value of showCancelButton to true. The text inside the cancel button can be changed using the cancelButtonText property.

Each of these buttons can be given a different background color using the confirmButtonColor and cancelButtonColor properties. The default color for the confirm button is #3085d6, while the default color for the cancel button is #aaa. If you want to apply any other customization on the confirm or cancel buttons, you can simply use the confirmButtonClass and cancelButtonClass properties to add a new class to them. Once the classes have been added, you will be able to use CSS to change the appearance of those buttons. You can also add a class on the main modal itself by using the customClass property.

If you interacted with the alert messages in the first example, you might have noticed that the modals can be closed by pressing either the Enter or Escape key. Similarly, you can also click anywhere outside the modal in order to dismiss it. This happens because the value of allowEnterKey, allowEscapeKey, and allowOutsideClick is set to true by default.

When you show two different buttons inside a modal, the confirm button is the one which is in focus by default. You can remove the focus from the confirm button by setting the value of focusConfirm to false. Similarly, you can also set the focus on the cancel button by setting the value of focusCancel to true.

The confirm button is always shown on the left side by default. You have the option to reverse the positions of the confirm and cancel buttons by setting the value of reverseButtons to true.

Besides changing the position and color of buttons inside the alert messages, you can also change the background and position of the alert message or the backdrop around it. Not only that, but the library also allows you to show your own custom icons or images in the alert messages. This can be helpful in a lot of situations.

You can customize the backdrop of a sweet alert using the backdrop property. This property accepts either a Boolean or a string as its value. By default, the backdrop of an alert message consists of mostly transparent gray color. You can hide it completely by setting the value of backdrop to false. Similarly, you can also show your own images in the background by setting the backdrop value as a string. In such cases, the whole value of the backdrop string is assigned to the CSS background property. The background of a sweet alert message can be controlled using the background property. All alert messages have a completely white background by default.

All the alert messages pop up at the center of the window by default. However, you can make them pop up from a different location using the position property. This property can have nine different values with self-explanatory names: top, top-start, top-end, center, center-start, center-end, bottom, bottom-start, and bottom-end.

You can disable the animation when a modal pops up by setting the value of the animation property to false. The library also provides a timer property, which can be used to auto-close the modal once a specific number of milliseconds has passed.

In the following example, I have used different combinations of all the properties discussed in this section to create four different alert messages. This should demonstrate how you can completely change the appearance and behavior of a modal created by the SweetAlert2 library.

document.querySelector(".first").addEventListener("click", function() {
  swal({
    title: "Show Two Buttons Inside the Alert",
    showCancelButton: true,
    confirmButtonText: "Confirm",
    confirmButtonColor: "#00ff99",
    cancelButtonColor: "#ff0099"
  });
});

document.querySelector(".second").addEventListener("click", function() {
  swal({
    title: "Are you sure about deleting this file?",
    type: "info",
    showCancelButton: true,
    confirmButtonText: "Delete It",
    confirmButtonColor: "#ff0055",
    cancelButtonColor: "#999999",
    reverseButtons: true,
    focusConfirm: false,
    focusCancel: true
  });
});

document.querySelector(".third").addEventListener("click", function() {
  swal({
    title: "Profile Picture",
    text: "Do you want to make the above image your profile picture?",
    imageUrl: "https://images3.imgbox.com/4f/e6/wOhuryw6_o.jpg",
    imageWidth: 550,
    imageHeight: 225,
    imageAlt: "Eagle Image",
    showCancelButton: true,
    confirmButtonText: "Yes",
    cancelButtonText: "No",
    confirmButtonColor: "#00ff55",
    cancelButtonColor: "#999999",
    reverseButtons: true
  });
});

document.querySelector(".fourth").addEventListener("click", function() {
  swal({
    title: "Alert Set on Timer",
    text: "This alert will disappear after 3 seconds.",
    position: "bottom",
    backdrop: "linear-gradient(yellow, orange)",
    background: "white",
    allowOutsideClick: false,
    allowEscapeKey: false,
    allowEnterKey: false,
    showConfirmButton: false,
    showCancelButton: false,
    timer: 3000
  });
});

Important SweetAlert2 Methods

Initializing different sweet alert messages to show them to users is one thing, but sometimes you will also need access to methods which control the behavior of those alert messages after initialization. Fortunately, the SweetAlert2 library provides many methods that can be used to show or hide a modal as well as get its title, text, image, etc.

You can check if a modal is visible or hidden using the isVisible() method. You can also programmatically close an open modal by using the close() or closeModal() methods. If you happen to use the same set of properties for multiple alert messages during their initialization, you can simply call the setDefaults({configurationObject}) method in the beginning to set the value of all those properties at once. The library also provides a resetDefaults() method to reset all the properties to their default values.

You can get the title, content, and image of a modal using the getTitle(), getContent(), and getImage() methods. Similarly, you can also get the HTML that makes up the confirm and cancel buttons using the getConfirmButton() and getCancelButton() methods.

There are a lot of other methods which can be used to perform other tasks like programmatically clicking on the confirm or cancel buttons.

Final Thoughts

The SweetAlert2 library makes it very easy for developers to create custom alert messages to show to their users by simply setting the values of a few properties. This tutorial was aimed at covering the basics of this library so that you can create your own custom alert messages quickly. 

To prevent the post from getting too big, I have only covered the most commonly used methods and properties. If you want to read about all the other methods and properties which can be used to create advanced alert messages, you should go through the detailed documentation of the library.

Don't forget to check out the other JavaScript resources we have available in the Envato Market, as well.

Feel free to let me know if there is anything that you would like me to clarify in this tutorial.

Categories: Web Design

Summer On Your Desktop: Fresh Wallpapers For July 2018

Smashing Magazine - Sat, 06/30/2018 - 05:00
Summer On Your Desktop: Fresh Wallpapers For July 2018 Summer On Your Desktop: Fresh Wallpapers For July 2018 Cosima Mielke 2018-06-30T14:00:55+02:00 2018-07-11T12:36:25+00:00

For most of us, July is the epitome of summer. The time for spending every free minute outside to enjoy the sun and those seemingly endless summer days, be it in a nearby park, by a lake, or on a road trip, exploring unfamiliar places. So why not bring a bit of that summer joy to your desktop, too?

In this post, you’ll find free wallpapers for July 2018 created by artists and designers from across the globe, as has been our monthly tradition for more than nine years now. Please note that some of the wallpapers come in two versions as usual (with and without a calendar for the month), while the best-of selection at the end of the post only covers the non-calendar versions. Have a great July — no matter what you have planned!

Please note that:

  • All images can be clicked on and lead to the preview of the wallpaper,
  • You can feature your work in our magazine by taking part in our Desktop Wallpaper Calendar series. We are regularly looking for creative designers and artists to be featured on Smashing Magazine. Are you one of them?
Design Your Own Wallpaper

Igor Izhik has designed quite a lot of wallpapers for our monthly challenge. If you would like to get started yourself, check out his article in which he shares how he approaches all stages of the process as well as useful tips and tricks for creating illustrations in Adobe Illustrator. Get creative! →

Heated Mountains

“Warm summer weather inspired the color palette.” — Designed by Marijana Pivac from Croatia.

Triumphant July

“This summer started out hot, with eyes wide open and monitoring the World Cup in Russia. We wanted to decorate your desktop this July with this colorful Red Square illustration, just before the eagerly anticipated finale, hoping to see the trophy awarded to our players.” — Designed by PopArt Studio from Serbia.

Robinson Cat

Designed by Ricardo Gimenes from Sweden.

All You Need Is Ice Cream

“July is National Ice Cream Month! National Ice Cream Day is celebrated on the 3rd Sunday in July. On this day people celebrate with a bowl, cup or cone filled with their favorite flavor of ice cream. Share some ice cream and some love this month with my wallpaper!” — Designed by Melissa Bogemans from Belgium.

A Mighty Woman With A Torch

“Last year we visited NYC for the first time during the 4th of July. I took many photos of Lady Liberty and was so deeply inspired by her message.” — Designed by Jennifer Forrest from Indiana.

Night Sky Magic

Designed by Ricardo Gimenes from Sweden.

Fly Forever

“Challenges are part and parcel of life. Surpassing each challenge would be difficult, but it is you who decide whether to keep going or shun the drive. No matter whatever circumstance you are in, ignite your mind and just fly forward into action to achieve what you are capable of and what is beyond your reach.” — Designed by Sweans from London.

Smile, It’s Summer

“July brings me to summer, and last year in summer, I went to Salzburg, Austria where I took this photo. So every beginning of summer, I think about Salzburg and how sunny and warm it was there.” — Designed by Ilse van den Boogaart from The Netherlands.

Even Miracles Take A Little Time

“‘One day, the people that didn't believe in you will tell everyone how they met you.’ Believe in Yourself. Give it all that you can, take your own sweet time and be your own miracle!” — Designed by Binita Lama from India.

July Favorites

Lots of beautiful wallpapers have been created in the nine years since we embarked on our wallpapers adventure. And since it’d be a pity to let them gather dust, we once again dived deep into our archives on the lookout for some July treasures. Please note that these wallpapers, thus, don’t come with a calendar.

A Flamboyance Of Flamingos

“July in South Africa is dreary and wintery so we give all the southern hemisphere dwellers a bit of colour for those grey days. And for the northern hemisphere dwellers a bit of pop for their summer! The Flamboyance of Flamingos is part of our ‘Wonderland Collective Noun’ collection. Each month a new fabulous collective noun is illustrated, printed and made into a desktop wallpaper.” — Designed by Wonderland Collective from South Africa.

Summer Essentials

“A few essential items for the summertime weather at the beach, park, and everywhere in-between.” — Designed by Zach Vandehey from the USA.

Summer Cannonball

“Summer is coming in the northern hemisphere and what better way to enjoy it than with watermelons and cannonballs.” — Designed by Maria Keller from Mexico.

Mason Jar

“Make the days count this summer!” — Designed by Meghan Pascarella from the USA.

Birdie Nam Nam

“I have created a pattern that has a summer feeling. For me July and summer is bright color, joy and lots of different flowers and birds. So naturally, I incorporated all these elements in a crazy pattern.” — Designed by Lina Karlsson from Sweden.


Designed by Tekstografika from Russia.

Summer Never Ends!

“July is a very special month to me — it’s the month of my birthday and of the best cherries.” — Designed by Igor Izhik from Canada.

World UFO Day

“The holiday is dedicated to those who study the phenomena that have no logical explanation, and the objects that are attributed to an extraterrestrial origin.” — Designed by Cheloveche.ru from Russia.

Captain Amphicar

“My son and I are obsessed with the Amphicar right now, so why not have a little fun with it?” — Designed by 3 Bicycles Creative from the USA.

Tropical Lilies

“I enjoy creating tropical designs; they fuel my wanderlust and passion for the exotic, instantaneously transporting me to a tropical destination.” — Designed by Tamsin Raslan from the USA.

Day Turns To Night

Designed by Xenia Latii from Germany.

Eternal Summer

“And once you let your imagination go, you find yourself surrounded by eternal summer, unexplored worlds and all-pervading warmth, where there are no rules of physics and colors tint the sky under your feet.” — Designed by Ana Masnikosa from Belgrade, Serbia.

Taste Like Summer!

“In times of clean eating and the world of superfoods there is one vegetable missing. An old forgotten one. A flower actually. Rare and special. Once it had a royal reputation (I cheated a bit with the blue). The artichoke — this is my superhero in the garden! I am a food lover — you too? Enjoy it — dip it!” — Designed by Alexandra Tamgnoué from Germany.

Road Trip In July

“July is the middle of summer, when most of us go on road trips, so I designed a calendar inspired by my love of traveling and summer holidays.” — Designed by Patricia Coroi from Romania.

World Chocolate Day

Designed by Cheloveche.ru from Russia.

Join In Next Month!

Please note that we respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience throughout their works. This is also why the themes of the wallpapers weren’t in any way influenced by us, but rather designed from scratch by the artists themselves.

Thank you to all designers for their participation. Join in next month!

Categories: Web Design

How To Craft The Perfect Web Developer Résumé

Smashing Magazine - Fri, 06/29/2018 - 05:00
Aditya Sharma, 2018-06-29 (updated 2018-07-11)

Did you know that your résumé could be the reason holding you back from that 150k+ job you know you deserve? This guide is dedicated to all the web developers out there and will demonstrate how you can create a successful résumé that will get you more shortlists than you can fathom. If a piece of paper is standing between you and your dream job, it’s time to show it who’s the boss.

Our guide to making a killer résumé will broadly talk about the following:

  1. Why Would A Web Developer Need A Résumé In The First Place?
  2. Résumé Format: Sorting Out The Key Elements Of A Web Developer Résumé
  3. Professional Summary
  4. Technical Skills
  5. Managerial Skills
  6. Professional Experience
  7. Education Section In A Web Developer Résumé
  8. Technical Projects
  9. Additional Sections In The Web Developer Résumé
  10. ATS Optimization
  11. Key Takeaways
  12. A Sample Résumé To Get You Started
1. Why Would A Web Developer Need A Résumé In The First Place?

I don’t need a résumé! I’ll have a job before I wake up tomorrow!

I sighed. He was a brilliant web developer and we both knew it. He felt he was wasting away his life and deserved something better — I agreed. He gave his two weeks’ notice, under the impression that a new job with a better profile would simply land in his lap.

But he had ignored that singular document which has a track record of making or breaking people’s lives — the humble résumé.


As part of my job, I go through dozens of résumés on a daily basis. I had seen his résumé as well. I wish I had the heart to tell him that just being a kickass developer isn’t enough — you have to convince the recruiter of that on a one-pager. And while accomplishing a task like that is not rocket science, it’s not a walk in the park either.

Web developers know that a lot depends on networking and client recommendations, so a résumé usually takes a backseat. Couple that with a growing demand and you know there won’t ever be a shortage of projects.

So why waste time on a web developer résumé? Let’s take a moment and study the graph below:

Graph showing the demand for web developers from 2012-2016.

The data is taken from Indeed.com, and if you notice the trend in the past few years, you’ll observe two main facts:

  • With the advent of web-based startups, web development peaked 5-6 years ago and has since been either steady or in decline.
  • For jobs that require web development as the only skill, the demand is steady, as of now.

Additionally, going by Forbes’ analysis, fields like AI, AR and Data Science are the new up-and-coming stalwarts in the tech industry. Influencers and tech experts strongly believe that these domains have the ability to revamp the way we’ve been doing things until now. So while the demand for web developers is steady right now, the picture is not all rosy.

Sure, as a web developer, you are confident that you’ll never have a shortage of projects. You have a list of happy clients which you served in the past and you believe that their network is enough to sustain you. But if you look at the tech industry in general and see how trends shape up and die down at a breathtaking pace, you’ll realize that this approach is probably not the wisest one.

You think you’ll always have a job or a project because you specialize in something which is in huge demand, but how long do you want to be at the receiving end of clients’ tirades? Wouldn’t you want flexible hours, remote work, or, for a change, professional clients who know what they want?

Wouldn’t you want to 1-up your game from an 80k job to a 150k+ profile?

That’s where your ré­su­mé comes in.

Believe us, we’ve seen how that single document has changed people’s lives — the individual remains the same, with their certifications, qualifications, previous profiles and what not, but just revamping how that individual appears on paper suddenly transforms the person themselves.

We’ve seen it because we’ve done it.

And if the demand for web developers is there, you don’t think you’re the only one who noticed that, right? For every project that you willingly drop or miss, you’ll find ten developers who will pick it up before it even hits the ground. You have a fair idea of the cutthroat competition which is out there, but continue reading and you’ll find out that the competition is not even the tip of the iceberg. The actual recruitment process and the role which a ré­su­mé plays in it might be an eye-opener for you.

Which is why, without further ado, let’s dive in.

2. Résumé Format: Sorting Out The Key Elements Of A Web Developer Résumé

Broadly speaking, your web developer ré­su­mé will contain the following sections:

  • Contact info
  • Professional Summary
  • Key Skills (Technical + Managerial)
  • Professional Experience
  • Education
  • Projects
  • Extra: Social profiles
  • Interests, Hobbies, Extra-curricular achievements (Optional).

How do you arrange all these sections? What’s the order that you are supposed to follow? Are all of these sections necessary?

That’s where understanding ré­su­mé layouts and formats becomes important.

A ré­su­mé is either Reverse-Chronological, Functional or Hybrid.

2.1 Reverse-chronological

As the name suggests, it starts off by listing your current or last-held profile and continues from there until you reach the part about your ‘Education.’

  • It’s ATS-friendly (more on ATS below) and allows you to emphasize your current work profile and achievements. It’s easy to create and is considered the standard format for most résumés.
  • The only downside is that if you are a frequent job-switcher, it might look bad on paper. There’s no way to hide career gaps in a reverse-chronological résumé.

Below is an example of the same.

Format for a ‘reverse-chronological’ résumé.

2.2 Functional Résumés

It only lists the companies you worked at, without diving into the details of your actual work profile. Instead, you create a separate section in which you group all your points under relevant skills.

It can be used to hide gaps in a career trajectory, but we aren’t fans of this format, simply because you can merely disguise your gaps; sooner or later, they’re bound to show up. It’s always better to be honest, always.

Here’s an example of a functional ré­su­mé. If you’ll notice, it doesn’t allow the recruiter to see your career trajectory or how you evolved to reach where you are.

Format for a ‘functional’ résumé.

2.3 Hybrid (Combination) Résumés

This format is identical to the reverse-chronological format, apart from the fact that in the ‘Professional Experience’ section, the points are grouped by the skills that they represent.

A format like this allows the recruiter to scan relevant points only based on the skills they are looking for. If you can customize your ré­su­mé to the job description, you can direct the attention of the recruiter to where you want. This is the biggest advantage of using this ré­su­mé format.

Another subset of ‘hybrid’ résumés is one where you extract all your achievements and create a separate ‘Summary of Skills’ section. This allows you to create a highly targeted résumé, focusing only on the skills which you want to showcase to the recruiter.

You’ll find examples of both down below.

Format for a combination (hybrid) résumé.

Combination résumé containing an additional ‘Summary of Skills.’

3. Professional Summary

We’ve encountered innumerable people who spent countless hours and days polishing their ‘Résumé Objective’ section. Are you also one of them?

What is the difference between the ‘Professional Summary’ and the ‘Résumé Objective’ section? We like to misappropriate a JFK quote to answer all queries regarding this conundrum:

Ask not what the company can do for you, but what you can do for the company.

Meet Vanessa. She’s the Head Recruiter at a top-notch IT firm and is now looking for an awesome web developer. Her email is flooded with ré­su­més and they all look the same. She’s tired of seeing people listing out what they want — it looks more like a shopping list than a professional ré­su­mé. Surprisingly, all of them are ‘hard-working’ and possess ‘excellent communication skills’ and are ‘looking for a challenging leadership position’.

— yawn —

Then she opens your ré­su­mé which contains a crisp 4-5 line summary detailing your skills and how you plan to apply those skills for achieving organizational goals. You did your research where you identified high-priority needs of the company, and you’ve mentioned how you plan to address them through the skills that you possess.

She sits up and stops thinking about Game of Thrones for a second. She’s hooked and now wants to meet you in person.

Mission accomplished.

Let us clarify that through an example. Check out a couple of professional summaries and try to see which one delivers greater impact.

I’m a 4 years experienced Web Developer specializing in front-end who’s skilled in ASP.NET, Javascript, C++, HTML, CSS, PHP & MySQL. I am looking for the position of a web developer in a company which will utilize my excellent team management and communication skills.

Technically, there’s nothing wrong with this, just like technically there was nothing wrong with the Star Wars prequels. Now check this out:

5+ years experienced, dynamic and detail-oriented Full Stack Web Developer with a track record of spearheading teams to engineer user-centric solutions for achieving breakthrough efficiency and driving client satisfaction. Highly skilled in end-to-end SDLC and effectively prototyped 20+ product features annually for XYZ to achieve a 25% reduction in costs. Registered unparalleled customer satisfaction levels and received the 2017 Employee of the Year Award for achieving a record-breaking NPS score out of 300+ employees.

See the difference? If you’ll notice, the summary doesn’t include a detailed list of his technical proficiency. It’s better to reserve that for a separate Technical Skills section. The Summary is there to give a bird’s-eye view of your professional career and should be a reason for the recruiter to continue with the rest of your Ré­su­mé.

Additionally, in the first example, the summary ended with an ‘Objective’ statement which serves no purpose to the recruiter. But highlighting your achievements (in the second example) will make the reader pause...and if you manage to do that, congratulations — you are already one step ahead of a majority of applicants out there.

Are you wondering whether the kind of professional summary listed above is a bit unreal? What if you are an entry-level web developer with no concrete achievement to boast of? What do you do then?

In that scenario, and only in that scenario, in the absence of any significant work experience, you can go for an ‘Objective’ section in place of a Professional Summary. And there can be multiple ways of approaching the same.

Goal-oriented Web Developer with a Bachelor's degree in Computer Science and looking to enhance my professional experience with an IT company specializing in web development. Armed with a deep sense of responsibility and possessing very high levels of enthusiasm to give my 110% for any endeavor.

Desperate much?

Right off the bat, it’s always better if the entire ré­su­mé is in third-person — that means no references to ‘I’, ‘me’ or ‘mine.’ It’s always ‘possessing a track record,’ not ‘I possess a track record.’

Additionally, the above summary doesn’t inspire confidence. You can be a fresher and also sound professional without looking like you’ll die of starvation if you don’t get the job. Here’s how:

Dynamic and detail-oriented Web Developer with a knack for conceptualizing and delivering elegant, user-friendly solutions effectively and efficiently. Possesses a track record of developing an e-commerce mobile app, a CRM online portal and a fully-functional website for a nonprofit working with underprivileged children. Armed with an extensive understanding of end-to-end SDLC and cloud computing. Regular participant and organizer of local hackathons and web developer meetups.

This only shows that you don’t need extensive experience with high-end corporates to make a killer professional summary. You only need to understand the motivations of the recruiter who’s hiring.

4. Technical Skills

As mentioned earlier, for a technical résumé like that of a web developer, it’s better to reserve a separate section for all your technical expertise. But even then, there are ways in which you can optimize the space available to deliver greater impact.

Most web developer résumés that we see usually give a long list of technical proficiencies. In their quest to make the list comprehensive and all-inclusive, they often compromise on readability. Let us clarify that through an example:

Jenkins Maven OOJS CiCd Docker Angular 4 Apache Tomcat 6 Bitbucket Git Jira Chrome developer tools HTML5 Kendo UI BootStrap Mozilla Firebug (debugger) CSS3.0 MySQL JQuery AJAX JavaScript PHP

A layman would think that the skills are all neatly arranged — surely there’s no other way to make it even better, is there?

Well, as a matter of fact, there is. In case of any dilemmas, it’s always better to place yourself in the shoes of the recruiter and come up with ways to make the job of evaluating you even easier.

While there’s nothing wrong with the way the skills are mentioned above, there’s another way through which you can present the same information and make it look even more relevant.

Web Technologies & Frameworks: Angular 4, HTML5, CSS3.0, Kendo UI, PHP

Scripts/UI: JavaScript, OOJS, JQuery, AJAX, BootStrap

Database and ORM: MySQL

Web Debug Tools: Mozilla Firebug (debugger), Chrome developer tools

Application/Web Server: Apache Tomcat 6

Versioning and other tools: Git, Bitbucket, Jira

Deployment Tools: Docker, Maven, CiCd, Jenkins


By merely assigning sub-headings to the skills that you possess, you’ve made the recruiter’s job easier. Now she only has to scan the sub-headings to quickly find whether what she’s looking for is in your résumé.
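Since the audience here is developers, the grouping above can even be generated from data. Below is a minimal Python sketch of the idea; the category mapping is illustrative only, and you would of course substitute the skills you actually possess:

```python
# Toy sketch: render a categorized skills dictionary as the
# recruiter-friendly sub-headed layout shown above.
CATEGORIES = {
    "Web Technologies & Frameworks": ["Angular 4", "HTML5", "CSS3.0", "Kendo UI", "PHP"],
    "Scripts/UI": ["JavaScript", "OOJS", "JQuery", "AJAX", "BootStrap"],
    "Versioning and other tools": ["Git", "Bitbucket", "Jira"],
}

def format_skills(categories):
    """Render each category as a 'Heading: skill, skill, ...' line."""
    return "\n".join(
        f"{heading}: {', '.join(skills)}" for heading, skills in categories.items()
    )

print(format_skills(CATEGORIES))
```

The point of keeping the data separate from the formatting is the same as in the résumé itself: the flat list stays complete, while the sub-headings carry the scanning load.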

5. Managerial Skills

Many web developers stop at ‘Technical Skills’ and continue with their ‘Professional Experience.’ True, for a tech profile, technical skills play a major role and act as a foundation for whether or not you’ll be shortlisted.

But remember when we talked about the difference between an 80k profile where you are dealing with nonsense clients and a 150k+ profile with flexible hours? The ‘Key Skills’ section containing your managerial and leadership skills will play a critical role in bridging that gap. Web developers are a dime a dozen from a recruiter’s perspective; it’s cheaper to just hire a freelancer for their development work, if that’s what they are looking for.

But they are not, are they? They are looking for a full-time profile. What do you think would be the difference between the two?

Ownership. Leadership.

Companies aren’t just looking for a robot who can be programmed to do basic tasks. They are looking for future leaders who can take over a few years down the line. And it’s your task to convince the recruiter that you are such an individual. Any freelancer working on an hourly basis will possess the technical skills that you do. But it’s your leadership and managerial skills that will help you make it.

Coming to your non-technical skills, it’s always better if you prioritize hard, professional skills over soft skills like ‘communication’ and ‘self-motivation.’ Why? Simply because there’s no way to prove or quantify them. But you can always add skills like ‘Issue Resolution,’ ‘Leadership’ or ‘Project Management’ and then proceed to showcase them in your ‘Professional Experience’ section.

A simple rule of thumb while mentioning your managerial skills is “Show, Don’t Tell.” It’s always better if you are able to substantiate the skills that you mention with concrete points down below.

Don’t just say that you are a leader. Show that you’ve led teams to achieve departmental goals.

Don’t say that you are good at negotiating. Show how your negotiation skills led to an x% reduction in costs.

A few examples of managerial skills which you can include in your ré­su­mé are below.

Front-End Development, Agile Methodology, Code Optimization, Documentation & Reporting, Requirement Gathering, UI Enhancement, Module Management, Issue Resolution, Stakeholder Management, Client Relationship Management, Project Management, Team Leadership

Mention only those skills which you can elucidate in your ré­su­mé. There’s no point in adding a random list of skills which you’ll find insanely difficult to justify at the time of your interview.

How do you identify all those managerial skills which are relevant?

The ‘Job Description.’ That is your Bible for your entire ré­su­mé writing process.

Look for non-technical skills (both managerial and soft skills) and see if they can be included. Only add them if you think you can justify them, either in the points below or at the time of your interview. Nothing will hurt your chances more than blatantly lying on your résumé.

6. Professional Experience

How do you go about framing points for your ré­su­mé?

The ‘Professional Experience’ section is going to be the most critical section of your ré­su­mé. It’s the fuel of your car — the body and looks are alright, but the car won’t move an inch without juice. This section is that juice for your ré­su­mé.

A handy resource for you would be the ‘Job Description.’ Your task is to align the entire ré­su­mé along the lines of what the recruiter is looking for. Your ré­su­mé should look like it’s in response to the JD, that you possess the ability to resolve all the issues which are inherently mentioned in that document.

6.1 Master CV

A better (but tiring) way to proceed would be to make a MasterCV first. It’s a time-consuming process, but we can guarantee that it’s going to give you rich dividends for the rest of your jolly professional career.

We are assuming that you never actually got a chance to sit down with your résumé, to look at it and figure out what’s wrong with it and how it can be better. And it’s perfectly alright if that’s the case. Most people have that attitude when it comes to their résumé. It’s always a last-minute rush, which means that there’s almost always something you’ll inevitably miss, and always a chance that it could be made better.

The master CV is how you avoid that situation; it’s an important piece in getting you that 150k+ profile. It’s basically a list of literally everything that you have ever done till date. And we mean everything.

A masterCV is for your own use. No one is going to see it. There’s no need to structure it or keep it to two pages — it can be a 10-page long list of bullet points consisting of every achievement (curricular, extra-curricular, professional, achievements around your hobbies or interests — you name it) in your entire life, or it can be full of deathly long paragraphs. The idea is to keep a single document containing all your achievements till date, and regularly updating it.

What do you think happens when you update your ré­su­mé in a last-minute rush? You only add those points which you are able to recollect at that moment. But if you think about it, your tenure at any organization must be filled with tiny milestones and achievements (i.e. milestones which get missed out when you update your ré­su­mé in a rush).

Once you have your masterCV ready, take out the JD of the profile that you are targeting and scan your masterCV for points which can be interpreted and rephrased along the lines of what the recruiter is looking for. The idea is to customize your ré­su­mé according to the job, and not send a standard ré­su­mé for any and all profiles that you come across.

As you continue to update your masterCV, years down the line when you’ll be applying for something else, you can again come back to that same document and pick out points for tailoring your ré­su­mé to that new profile.
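If you keep your master CV as plain text, one crude way to scan it against a JD is a simple keyword-overlap check. This is only a rough sketch with a made-up heuristic (lowercase words of four or more letters), not a real screening tool, but it can surface which master-CV points already echo the job description:

```python
import re

def keyword_overlap(jd_text, cv_text):
    """Return JD words (4+ letters) that also appear in the CV text."""
    jd_words = set(re.findall(r"[a-z]{4,}", jd_text.lower()))
    cv_words = set(re.findall(r"[a-z]{4,}", cv_text.lower()))
    return sorted(jd_words & cv_words)

# Hypothetical JD and master-CV snippets
print(keyword_overlap(
    "Seeking developer with leadership and project management skills",
    "Led project teams; managed stakeholder relationships",
))
```

Anything the check misses is a hint: either the JD keyword genuinely doesn’t apply to you, or a master-CV point needs rephrasing along the recruiter’s own vocabulary.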

6.2 Cause-Effect Relationship: The Princeton Formula To Rule Them All

Another thing to keep in mind is the cause-effect relationship. Most people find themselves at a loss when it comes to filling out actual points for the job which they were doing. They know what they did, but they can’t write it down in coherent points. When that happens, they resort to a typical JD for jobs like the one they themselves were doing, and then morph those points into their own ré­su­mé.

A fundamental thing which is wrong with this approach is that a typical JD is responsibility-based, while your ré­su­mé should be achievement-based. A JD contains a list of things which the recruiter expects a candidate should be capable of, while your ré­su­mé will contain your achievements around those responsibilities. There’s a stark difference.

The good thing is that a vast majority of applicants resort to this approach. So a tiny deviation from this well-trodden path will automatically elevate your chances of getting shortlisted.

How do you do that? By making sure that there’s a coherent cause-effect relationship in each point. A foolproof way to make sure that you are able to do that is the Princeton formula along the lines of:

A + P + R = A

Action Verb + Project + Result = Accomplishment

If you are able to incorporate the essence of this formula in all your ré­su­mé points, trust us, 99% of your job is done.

Most applicants either mention their responsibilities or their achievements. But this formula ensures that you not only mention these two parameters but also detail the quantifiable impact of your achievements. Instead of wrapping your achievements around your profile, showcase the impact that your achievement had on the organization. When you do that, you instantly elevate your role from someone who just did what they were told to someone who took ownership of their responsibilities and delivered an impact at the macro level.

An example of the Princeton formula in action:

Spearheaded a team of 5 Junior Developers to effectively execute 11 projects with 100% on-time delivery while achieving a cost-reduction of 20% and registering CSAT levels of 4.88/5.00

This point is so much better than a generic point along the lines of:

Worked on various projects to decrease costs and achieve client satisfaction.

A point like this clearly highlights the quantifiable impact that you were able to achieve. Beginning a point with an action/power verb (a list of which you can find in the Princeton document linked above, or you can simply google the same) instantly magnifies the impact of that point, as opposed to most other candidates who often tend to ‘manage’ everything.

That’s the kind of point which makes the recruiter pause, and believe us, when a Hiring Manager is going through dozens of ré­su­més on a daily basis, it’s a superhuman task to make her pause and look at your ré­su­mé. Your task is to do just that, and that’s how you do it.
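For the programmatically inclined, the A + P + R formula is just a template with three slots. A toy Python sketch (the figures in the example are hypothetical) makes the structure explicit:

```python
def accomplishment(action_verb, project, result):
    """Compose a resume bullet as Action Verb + Project + Result."""
    return f"{action_verb} {project}, {result}"

# Hypothetical example in the spirit of the bullet quoted above
bullet = accomplishment(
    "Spearheaded",
    "a team of 5 Junior Developers to execute 11 projects",
    "achieving 100% on-time delivery and a 20% cost reduction",
)
print(bullet)
```

If any slot is empty when you try to fill it in, the point is incomplete: no action verb means you are listing a duty, and no result means you are not showing impact.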

6.3 Bucketing/Subheadings

Another critical weapon in your arsenal to make a stellar Developer ré­su­mé is bucketing, or sub-headings.

Merely framing immaculate points will only get you so far. Let’s say you picked apart your entire experience in your previous profile and came up with this:

  • Developing client-side libraries across both iOS and Android to enable usage of the offline sync feature for the app developer,
  • Envisioned & developed the common network layer for Android to accomplish a reduction in the SDK size by ~20%,
  • Commissioning the development of Logging Framework across all platforms including iOS, Android & Windows,
  • Achieved the ‘Team Excellence Award’ & played a critical role in applying for a patent based on the logging library,
  • Conceptualizing and developing a library for the company to reduce additional costs involved in using third-party libraries,
  • Spearheading a team of ~20 to conceptualize and effectively implement the Mark for Upload feature for the company,
  • Proposing a common network layer for all network calls to be used by the product to effectively optimize SDK size.

Sure, in their individual capacity, the points are meticulously framed and seem to follow the Princeton formula uniformly. But the entire work experience itself looks like a wall of text which will make the recruiter groan the moment she sees it. You don’t want that, do you?

Now look what happens when we take the same points and work our magic to make it a breeze for the recruiter, without changing a thing about the points themselves:

Team Management & Leadership

  • Spearheading a team of ~20 to conceptualize and effectively implement the Mark for Upload feature for the company
  • Commissioning the development of Logging Framework across all platforms including iOS, Android & Windows.

Library Management & Process Optimization

  • Conceptualizing and developing a library for the company to reduce additional costs involved in using third-party libraries
  • Developing client-side libraries across both iOS and Android to enable usage of the offline sync feature for the app developer
  • Proposing a common network layer for all network calls to be used by the product to effectively optimize SDK size.

Key Achievements

  • Envisioned & developed the common network layer for Android to accomplish a reduction in the SDK size by ~20%
  • Achieved the ‘Team Excellence Award’ & played a critical role in applying for a patent based on the logging library.

If that isn’t mic-drop stuff, we don’t know what is.

In a single instant, you transformed the entire professional experience by neatly arranging all the points into buckets or sub-headings. Consequently, the recruiter won’t have to go through the individual points — merely perusing the buckets will serve the purpose. And to further sweeten the deal, you bolded relevant words and phrases to make the recruiter’s job even easier. That’s what you want, isn’t it? If you make the recruiter’s job easier, she’ll surely return the favor.

6.4 Professional Experience Section for an Entry-level Web Developer

But again, does the above point look a bit unreal? What do you do if you are a fresher with no significant professional experience to mention?

Believe us, possessing years of work experience is not the only way to showcase that you’ll be fit for the job. More than the achievement itself, if you are able to demonstrate that you have the right attitude, your job is done.

So how do you phrase your professional experience in a way that will make you stand in comparison to a Developer armed with a few years of experience?

  • Include projects for which you freelanced in your career till date,
  • Bolster your Github profile and code that you’ve posted there,
  • Include all open-source projects you have contributed to,
  • Mention any hackathons or local developer meetups in which you participated or helped organize.

PRO-TIP: If you are looking for a short-term solution to beef up your entry-level web developer résumé, just look up some open-source projects online. You’ll find hundreds of projects to which you can contribute, and you can then incorporate that work on your résumé.

Meet Chad, an entry-level web developer looking for a high-end profile. After hours of deliberations and brainstorming, this is what he came up with:

Entry-level web developer possessing a BA Degree in Computer Science and armed with an eager-to-learn approach where I can deploy my excellent development skills.

— yawning continues —

Since you know that you only get one shot at the profile of your dreams, why would you sabotage your chances if you can do this instead:

  • Developed a webapp portal for an e-travel firm to increase the client’s sales by 48%,
  • Enabled the Smiles Dental Clinic to measure patient satisfaction scores through an online form. Assisted in boosting CSAT levels by 7 points within 2 months,
  • Independently developed a website for the local Baseball league championship to increase streaming sales by 50%,
  • Created a webapp to facilitate easy donations through Facebook & Whatsapp for Friendicoes Shelter for the Homeless. Raised donation levels by 45% & helped rehabilitate 25 people from the street.

That’s Vincent. He knew he was stuck in a vicious cycle wherein he needed work experience to gain work experience. So he took matters into his own hands and scouted the digital space for any and every project that he could find. Within a span of 4 months, he executed 4 such projects, strengthening his résumé to put it on par with a professional developer’s, and is now leading a team of his own at a top-notch firm.

7. Education Section In A Web Developer Ré­su­mé

This section is often underrated by most developers. Shouldn’t the professional experience and projects be the focus of your résumé?

Yes. But that doesn’t mean you can scribble your educational qualifications on the back of a napkin and staple it on your ré­su­mé.

You can follow the conventional path and include your degree, college, and graduation year.

But remember. You only get one shot at this.

Let us clarify that through an example:

BA — Computer Science
University of Syracuse, ‘16
GPA 3.9

Um. Okay. Again, it’s not technically wrong. But try this:

BA — Computer Science
University of Syracuse, 2013-2016

  • Utilized a deep-rooted passion for cloud technologies by contributing to the open-source AWS Project for New York University
  • Wrote a column on ‘Is AI the Industrial Revolution of the 21st Century’ for the college magazine
  • Developed the Salesforce Contacts mobile app to streamline operations & performed Jasmine Unit Tests in the TDD process
    • Deployed the MVVM Architecture for boosting ability to build scalable apps & optimized usage of Pagination & Sorting

We don’t have to elucidate the differences, do we? The best part is that it’s easily doable. It’s not necessary that your ‘Education’ section look exactly like this; the points above are just examples. But if you sit down and brainstorm, you’ll definitely come up with a list of things you can quantify and incorporate into your résumé: participation in clubs, internships, freelance projects, college competitions, publications... we could go on, really.

8. Technical Projects

If you’ve been following our tips until now, you can combine them all to build a brilliant ‘Projects’ section for your web developer résumé. Combining the Princeton formula with bucketing and bolding, this is what a sample ‘Projects’ section looks like:

A few obvious pointers that this sample highlights are as follows:

  • For every project, include an ‘Environment’ subheading which lists out all the tools and technologies which were deployed for executing that project. If there are a lot, you can categorize them into further classes (like we did with the ‘Technical Skills’ section).
  • A description of the company/client helps put the project in perspective. The idea is to showcase to the recruiter that you were working for a reputed company. You can include figures around number of employees, revenue, etc. to make sure it comes out like that.
  • Industry standards dictate the location and time period to be aligned to the right, with the company and project title aligned to the left.
  • Adding buckets or subheadings is an effective way to incorporate the skills and methodologies which the recruiter is looking for. You can scan the ‘Job Description’ for skills which the recruiter is targeting and phrase your points to ensure that the bucket (which goes on top of the points, meaning greater visibility) includes those skills.
  • Try to reserve a separate ‘Key Achievements’ section for as many projects as you possibly can, with quantifiable impact to showcase the depth of your contribution.
Key Projects section for the résumé of a Web Developer. (Large preview)

9. Additional Sections In The Web Developer Résumé

To deliver the oomph factor to your résumé, there are additional sections you can incorporate. Recruiters know the cost of any hiring decision, and they know that if you are on-boarded, you’ll spend a greater part of your day with other team members. It’s important for them to know that you’ll gel with the team; that’s where these additional sections come in.

You can include sections on ‘Extra-curricular Activities’, ‘Awards & Recognition’, ‘Hobbies/Interests’, and so on. It’s important to stay relevant even when you are working on these sections. Just saying you like to travel or play football won’t add any value to your résumé. Instead, quantify your hobbies and interests to show what they say about you.

Web developers, in particular, can include their social profiles. This is a great guide containing sample developer portfolios that will inspire you to polish your own. A well-maintained GitHub profile, for instance, will signify that you are not a developer just because you have a degree — it means that you actually like your job and find it engaging enough to do in your free time as well.

Here is a sample ‘Hobbies’ section, for instance, the likes of which we see on a daily basis:

Reading, travel, photography

Surprisingly, a vast majority of applicants will have a ‘Hobbies’ section like this. This tells the recruiter nothing.

Now, check this out:

  • Convener of monthly meetings of the Webber Society of California, with 800+ members in CA and 10,000+ pan US
  • Photography: Owner and administrator of the Free Smiles Photography Page on Facebook with 7k+ likes
  • Travelled to 7 countries in the last 12 months and documented the same on my travel blog (insert link)

Maybe you don’t own a photography page with 7k+ likes, and that’s okay. The idea is to quantify even your hobbies and interests, to give the recruiter an idea of what that hobby means to you. Most recruiters look for people who can have a life outside of the workplace and maintain a healthy work-life balance. If you can’t elaborate on your hobbies or interests, it’s better to avoid that section altogether than to include it and make it look like you just wanted to fill up space.

A ‘Portfolio’ section will do wonders for your résumé. You can find projects online which would only take a couple of hours; adding something like that to your résumé will instantly boost its value. You can’t attach a million lines of code in an appendix to your résumé to tell the recruiter that you like to code. But a healthy portfolio containing a list of happy clients and successfully executed projects will bolster your profile.

10. ATS Optimization

Ah. The dreaded ATS. You might have only heard rumors or sordid tales of it, but what exactly is the ATS?

If you’re the head recruiter of an MNC that receives thousands of applications on a daily basis, what are your options? Personally go through all of the résumés? Hire a team the size of Denmark and have them scan résumés 24/7? Or, you know, get software to do the job for you?

Applicant Tracking Systems run a keyword-matching algorithm: the software matches the résumé against the keywords present in the job description. Remember that one time when you sent a résumé to a company and never heard from them? Did you curse the recruiter, wondering why they couldn’t be bothered to send a standard rejection mail? Have you considered that maybe no human recruiter actually got a chance to scan your résumé? What if your résumé was rejected by the ATS even before it landed on a human’s desk?

That happens more often than you think. The solution, though, isn’t stuffing your résumé with keywords. Your task isn’t to beat the ATS alone: even if your résumé is parsed by the ATS, the recruiter will take one look and trash it before you get a chance to blink.

This is a great tool to match your résumé with the JD you are targeting. It will give you an ATS score depending on how many relevant keywords you used in the résumé against the JD. Moreover, it will give you a list of keywords you can include to increase your score. A lot depends on which particular ATS the company is using. Also, remember that the ATS, at the end of the day, is operated by a human recruiter. You can only guess which keyword the recruiter will look up in the ATS, so cater to as many relevant keywords as you possibly can, just to be sure.

Scan the JD to get a list of keywords that are important to the company; additionally, you can paste the entire JD into a word cloud tool that analyzes the frequency of words used in a text. Incorporate those keywords in an organic manner, without making it look like you are being blatant about it.
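The keyword-matching idea itself is simple enough to sketch in a few lines. The snippet below is purely illustrative (the function names and the scoring formula are my own; real ATS products add stemming, synonyms, and weighting on top of this):

```javascript
// Naive sketch of ATS-style keyword matching (illustrative only).
function extractKeywords(text) {
  // Lowercase, strip punctuation, split into a set of words.
  return new Set(
    text
      .toLowerCase()
      .replace(/[^a-z0-9\s]/g, " ")
      .split(/\s+/)
      .filter(Boolean)
  );
}

function keywordScore(resumeText, jobDescription) {
  const resumeWords = extractKeywords(resumeText);
  const jdWords = extractKeywords(jobDescription);
  // Which JD keywords appear in the résumé?
  const matched = [...jdWords].filter((w) => resumeWords.has(w));
  return { matched, score: matched.length / jdWords.size };
}
```

Running `keywordScore` on a résumé line and a JD line returns the overlapping keywords and a coverage ratio, which is roughly the signal an ATS ranks on.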

Reminder: the ATS is just one step in the entire recruitment process. You shouldn’t compromise meaning or authenticity for the sake of ATS optimization. It would be futile if the ATS is able to parse your résumé but the recruiter sitting behind a desk thinks the résumé itself was written by a machine.

11. Key Takeaways

To recap a few critical points that we touched on above:

  • A reverse-chronological résumé format is your best bet. A functional or hybrid (combination) résumé is not the best way to showcase your achievements with context and impact. A reverse-chronological résumé showcases your trajectory, giving a bird’s-eye view of your career to date.
  • If you are not an entry-level developer, go for a professional Summary section instead of an Objective section.
  • Divide your skills into Technical and Managerial Skills. Group all your technical skills under relevant sub-headings to make the job of the recruiter (who will be a generalist, not a ‘techie’) easier. Prioritize professional skills (hard skills) over soft skills and try to elucidate the skills you have mentioned in your ‘Professional Experience’ section.
  • A MasterCV is the ideal way to break down your job-hunting process into something much more manageable, not just for your immediate requirements but for the long run.

    Having a master document containing all your achievements to date will allow you to customize each job application, instead of sending a generic résumé for all vacancies.

    And tailoring your résumé to the job application is how you beat a majority of other applicants.

  • Keep the Princeton formula in mind (Action Verb + Project + Result = Accomplishment) while you are framing points under the ‘Professional Experience’ section. This allows you to establish a cause-effect relationship which can transform your entire application.
  • Bolding and bucketing (sub-headings) in your work experience section will make sure you pass the 6-second test. Use them to highlight only those achievements you want recruiters to notice before they dive into your actual résumé.
  • Go for additional sections (Hobbies, Interests, etc.) only if you think they will bolster your application, or if you can provide substantial details around them.
  • Once you are done, check the ATS score of your résumé against the job description for the profile you are targeting to identify gaps and areas of improvement.
12. A Sample Résumé To Get You Started

Still have questions about the résumé-writing process? Want to share your experience of making your résumé, or of the job hunt in general? Give us a shout-out in the comments and we’ll get back to you!

A complete sample résumé for a web developer (Large preview) (ra, yk, il)
Categories: Web Design

Create Interactive Gradient Animations Using Granim.js

Tuts+ Code - Web Development - Thu, 06/28/2018 - 06:36

Gradients can instantly improve the look and feel of a website, if used carefully with the right color combination. CSS has also come a long way when it comes to applying a gradient on any element and animating it. In this tutorial, we will move away from CSS and create gradient animations using a JavaScript library called Granim.js.

This library draws and animates gradients on a given canvas according to the parameters you set when creating a Granim instance. There are different methods which can be used to make your gradient respond to different user events like a button click. In this tutorial, we will learn about this library in detail and create some simple but nice gradient animation effects.

Create Solid Color Gradient Animations

Before we begin creating any gradient, you will have to include the library in your project. For this, you can either download Granim.js from GitHub or link directly to a CDN. The library version that I am using in this tutorial is 1.1. Some methods that we will discuss here were only added in version 1.1, so using an older library version when following this tutorial will not always give the expected result. Keeping these points in mind, let's create our first gradient using Granim.js.

Every time you create a new Granim instance, you can pass it an object of key-value pairs, where the key is the name of a particular property and the value is the value of the property. The element property is used to specify the CSS selector or DOM node which will point to the canvas on which you want to apply a particular gradient.

When you create a gradient animation where the colors change from a relatively light value to a darker value, it might become impossible to read some text that you have positioned on the canvas. For example, the initial gradient applied on an element might be a combination of yellow and light green. In such cases, the text of the canvas would have to be darker for users to be able to read it properly. 

Similarly, the gradient might consist of dark red and black at some other point, and in such cases the dark text would not be easy to read. Granim.js solves this problem for you by allowing you to specify a container element on which you can add the dark and light classes to style the text or other elements accordingly. The value of the elToSetClassOn property is set to body by default, but you can also specify any other container element. The dark and light class names are updated automatically based on the average color of the gradient.

The elToSetClassOn property does not work by itself. You will also have to specify a name for the Granim instance that you created using the name property. If you set the name to something like first-gradient, the name of the classes applied on the container element will become first-gradient-light or first-gradient-dark based on how light or dark the gradient currently is. This way, any element which needs to change its color based on the lightness or darkness of the gradient will be able to do so with ease.
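As a mental model only, here is a sketch of how such a class name could be derived from the gradient’s colors. The luminance formula and the midpoint threshold below are my assumptions for illustration, not Granim’s actual implementation:

```javascript
// Illustrative sketch (not Granim's internal code): derive the light/dark
// class from the average perceived luminance of the gradient's colors.
function averageLuminance(hexColors) {
  const total = hexColors.reduce((sum, hex) => {
    const n = parseInt(hex.slice(1), 16);
    const r = (n >> 16) & 255;
    const g = (n >> 8) & 255;
    const b = n & 255;
    // Weighted RGB average, on a 0-255 scale.
    return sum + (0.299 * r + 0.587 * g + 0.114 * b);
  }, 0);
  return total / hexColors.length;
}

function granimClass(name, hexColors) {
  // e.g. "first-gradient-light" or "first-gradient-dark"
  return averageLuminance(hexColors) > 127 ? `${name}-light` : `${name}-dark`;
}
```

Whatever formula the library actually uses, the practical takeaway is the same: style your text against both `first-gradient-light` and `first-gradient-dark` selectors and let the class toggle handle readability.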

The direction in which a gradient should be drawn can be specified using the direction property. It has four valid values: diagonal, left-right, top-bottom, and radial. The gradients that you create will not move in those particular directions—they will just be drawn that way. The position of the gradient doesn't change during the animation; only its colors do.

There is also a states property, which accepts an object as its value. Each state specified inside the states object will have a name and a set of key-value pairs. You can use the gradients property to specify different colors which should make up a particular gradient. You can set the value of this property to be equal to an array of gradients. 

Granim.js will automatically create an animation where the colors of the gradient change from one set to another. The transition between different gradients takes 5,000 milliseconds by default. However, you can speed up or slow down the animation by setting an appropriate value for the transitionSpeed property.

After the gradients start animating, they will have to come to an end at one point or another. You can specify if the gradient should then just stop there or start animating again from the beginning using the loop property. This is set to true by default, which means that the gradient would keep animating.

Each color in a gradient can have a different opacity, which can be specified using the opacity property. This property accepts an array to determine how opaque each color is going to be. For two gradient colors, the value can be [0.1, 0.8]. For three gradient colors, the value can be [1, 0.5, 0.75], etc.

You also have the option to specify the time it takes for the gradient animation to go from one state to another using the stateTransitionSpeed property. This is different from the transitionSpeed property, which controls the animation speed within a single state.

In the following code snippet, I have created two different Granim instances to draw different gradients. In the first case, we have only specified a single gradient, so there is no actual animation and the colors don't change at all.

var firstGranim = new Granim({
  element: "#first",
  name: "first-gradient",
  direction: "diagonal",
  opacity: [1, 1],
  states: {
    "default-state": {
      gradients: [["#8BC34A", "#FF9800"]]
    }
  }
});

var secondGranim = new Granim({
  element: "#second",
  name: "second-gradient",
  elToSetClassOn: ".wrapper",
  direction: "top-bottom",
  opacity: [1, 1],
  states: {
    "default-state": {
      gradients: [
        ["#9C27B0", "#E91E63"],
        ["#009688", "#8BC34A"]
      ],
      transitionSpeed: 2000
    }
  }
});

Animate Gradients Over an Image

Another common use of the Granim.js library would be to animate a gradient over an image drawn on the canvas. You can specify different properties to control how the image is drawn on the canvas using the image property. It accepts an object with key-value pairs as its value. You can use the source property to specify the path from which the library should get the image to draw it on the canvas.

Any image that you draw on the canvas will be drawn so that its center coincides with the center of the canvas. However, you can use the position property to specify a different position to draw the image. This property accepts an array of two elements as its value. The first element can have the values left, center, and right. The second element can have the values top, center, and bottom. 

These properties are generally useful when you know that the size of the canvas and the image won't match. In these situations, you can use this property to specify the part of the image that should appear on the canvas.

If the images and the canvas have different dimensions, you can also stretch the image so that it fits properly inside the canvas. The stretchMode property also accepts an array of two elements as its value. Three valid values for both these elements are stretch, stretch-if-smaller, and stretch-if-larger.

A gradient with blend mode set to normal will completely hide the image underneath it. The only way to show an image below a gradient of solid colors would be to choose a different blend mode. You can read about all the possible blend mode values for a canvas on MDN.

I would like to point out that the ability to animate a gradient over an image was only added in version 1.1 of the Granim.js library. So you will have to use any version higher than that if you want this feature to work properly.

var firstGranim = new Granim({
  element: "#first",
  name: "first-gradient",
  direction: "diagonal",
  opacity: [1, 1],
  image: {
    source: "path/to/rose_image.jpg",
    position: ["center", "center"],
    blendingMode: "lighten"
  },
  states: {
    "default-state": {
      gradients: [
        ["#8BC34A", "#FF9800"],
        ["#FF0000", "#000000"]
      ]
    }
  }
});

var secondGranim = new Granim({
  element: "#second",
  name: "second-gradient",
  elToSetClassOn: ".wrapper",
  direction: "top-bottom",
  opacity: [1, 1],
  image: {
    source: "path/to/landscape.jpg",
    stretchMode: ["stretch", "stretch"],
    blendingMode: "overlay"
  },
  states: {
    "default-state": {
      gradients: [
        ["#9C27B0", "#E91E63"],
        ["#009688", "#8BC34A"]
      ],
      transitionSpeed: 2000
    }
  }
});

Methods to Control Gradient Animation Playback

Up to this point, we did not have any control over the playback of the gradient animation once it was instantiated. We could not pause/play it or change its state, direction, etc. The Granim.js library has different methods which let you accomplish all these tasks with ease.

You can play or pause any animation using the play() and pause() methods. Similarly, you can change the state of the gradient animation using the changeState('state-name') method. The state-name here has to be one of the state names that you defined when instantiating the Granim instance.

More methods were added in version 1.1 which allow you to change the direction and blend mode of an animation on the fly using the changeDirection('direction-name') and changeBlendingMode('blending-mode-name') methods.

In the following code snippet, I am using a button click event to call all these methods, but you can use any other event to call them.

var firstGranim = new Granim({
  element: "#first",
  name: "first-gradient",
  elToSetClassOn: ".wrapper",
  direction: "top-bottom",
  opacity: [1, 1],
  isPausedWhenNotInView: true,
  image: {
    source: "path/to/landscape.jpg",
    stretchMode: ["stretch", "stretch"],
    blendingMode: "overlay"
  },
  states: {
    "default-state": {
      gradients: [
        ["#9C27B0", "#E91E63"],
        ["#009688", "#8BC34A"]
      ],
      transitionSpeed: 2000
    },
    "green-state": {
      gradients: [
        ["#4CAF50", "#CDDC39"],
        ["#FFEB3B", "#8BC34A"]
      ],
      transitionSpeed: 2000
    },
    "red-state": {
      gradients: [
        ["#E91E63", "#FF5722"],
        ["#F44336", "#FF9800"]
      ],
      transitionSpeed: 2000
    }
  }
});

$(".play").on("click", function () { firstGranim.play(); });
$(".pause").on("click", function () { firstGranim.pause(); });
$(".diagonal").on("click", function () { firstGranim.changeDirection("diagonal"); });
$(".radial").on("click", function () { firstGranim.changeDirection("radial"); });
$(".red-state").on("click", function () { firstGranim.changeState("red-state"); });
$(".green-state").on("click", function () { firstGranim.changeState("green-state"); });
$(".color-dodge").on("click", function () { firstGranim.changeBlendingMode("color-dodge"); });
$(".color-burn").on("click", function () { firstGranim.changeBlendingMode("color-burn"); });
$(".lighten").on("click", function () { firstGranim.changeBlendingMode("lighten"); });
$(".darken").on("click", function () { firstGranim.changeBlendingMode("darken"); });

Final Thoughts

In this tutorial, I have covered the basics of the Granim.js library so that you can get started with it as quickly as possible. There are a few other methods and properties that you might find useful when creating these gradient animations. You should read the official documentation to learn about them all.

If you’re looking for additional JavaScript resources to study or to use in your work, check out what we have available in the Envato Market.

If you have any questions related to this tutorial, feel free to let me know in the comments.

Categories: Web Design

How Mobile Web Design Affects Local Search (And What To Do About It)

Smashing Magazine - Thu, 06/28/2018 - 05:30
By Suzanna Scacca, 2018-06-28

As mobile-first takes center stage in the majority of articles I write these days, I’ve had a number of designers and developers question why that is. Sure, Google has made a big push for it, so it’s smart to do what Google tells you. But, for some websites, the majority of traffic doesn’t come from mobile users.

At the moment, there are certain websites that happen to receive more mobile traffic than others, and a lot of it boils down to location. As Google explains it:

“Looking for something nearby—a coffee shop, noodle restaurant, shoe store—is one of the most common searches we do. In fact, nearly one-third of all mobile searches are related to location.”

Logically, it makes sense. If a user has access to a desktop or laptop at home or work, they’re more likely to use it to initiate a search. But when they’re multitasking (like coordinating dinner with a friend through Skype), walking around a city, or ordering dinner in without wanting to move from the couch, a mobile device is the quickest way to get that information.

In this article, I’m going to focus explicitly on these kinds of consumers and the websites that appeal to them. In other words, if you design websites for businesses with a local presence, keep reading to learn how to use mobile web design to improve their local search ranking.

Seven Mobile Web Design Strategies To Use For Local Search

In last year’s Local Consumer Review survey, BrightLocal revealed that 97% of consumers had used the Internet to search for local businesses at some point in 2017. For some users, the Internet was a frequent resource, with 12% looking for new businesses every day and 29% doing so at least once a week.

A breakdown of how frequently people search for local businesses online. (Image source) (Large preview)

A report by Hitwise shows that the majority of online searches begin on mobile:

Industries whose users most commonly begin their searches on mobile. (Image source) (Large preview)

Notice the trend in business types whose users most often begin their searches on mobile (i.e. they’re mostly local businesses).

Further, it appears that these kinds of searches are done for the purposes of research at the start of the buyer’s journey. If web designers and developers can get into the minds of their target users and the kinds of questions they might ask or features they might seek out, they can more effectively build a relevant mobile experience through their sites.

For those of you who specialize in building websites for clients with a local user base, you should utilize mobile design strategies that improve local search results. While some of your efforts outside the website will help with this (like creating a Google My Business page and responding to reviews on Yelp), there’s a lot that can be done with your design to greatly contribute to this as well.

Strategy 1: “Design” Your Metadata For Mobile

Copywriters and web developers are already aware of what a critical role metadata plays in a website’s search marketing efforts. In just a few succinct strings of text, you can tell search engines and your audience a lot about your website and each of its web pages. This is particularly helpful in local search as users look for results that answer the “[fill in the blank] near me” question.

But that’s not the strategy I’m talking about here. Instead, I want to focus on how you can “design” your metadata so that it’s more attractive to mobile users once your website actually appears in their local search results.

There are a couple of ways to do this:

The first is to craft succinct metadata strings for each web page. Let’s take the Liquid Surf Shop website, for instance:

Refer to the first search result for Liquid Surf Shop. Notice how succinctly it’s written. (Image source) (Large preview)

The first search result looks nice, doesn’t it? The web page name and URL each fit on one line. The description accurately describes what the shop does (and points out where it’s located!) while also fitting within the allotted space for mobile search descriptions.

Now, take a closer look at the Liquid Surf Shop when it’s compared against direct competitors in mobile search:

Liquid Surf Shop’s metadata is well-written and to the point. (Image source) (Large preview)

If you look at the entries for East of Maui and Dewey Beach Surf Shop above, notice how their descriptions end with an incomplete sentence. Then, look at the Bethany Surf Shop below them: the meta title is way too long for the space given. This lack of attention to metadata might cost these websites visitors when they are positioned alongside a well-written listing like Liquid Surf Shop’s.
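Google doesn’t publish exact truncation points, and they vary by device, so any hard limit is a rule of thumb. As a rough self-check, you can flag metadata likely to be cut off in mobile results (the character limits below are my assumptions, not official figures):

```javascript
// Rough sketch: flag metadata likely to be truncated in mobile SERPs.
// The limits are rules of thumb, not published Google specifications.
const LIMITS = { title: 60, description: 120 };

function checkMeta({ title, description }) {
  return {
    titleTruncated: title.length > LIMITS.title,
    descriptionTruncated: description.length > LIMITS.description,
  };
}
```

A quick pass like this over a site’s pages catches the kind of run-on titles and cut-off descriptions seen in the competitor listings above before they ship.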

Another thing you can do to improve local search listing appearance (as well as how high it ranks on the page) is to use schema markup in your design’s code.

Schema.org has created a robust vocabulary of structured data that businesses can use to improve search engine comprehension and, consequently, search results. Local businesses, in particular, will find schema markup especially helpful, as it allows them to “tag” the various elements consumers tend to use in the decision-making process.

Here’s an example of schema markup done well for a local business: Henlopen City Oyster House:

Schema markup found for Henlopen City Oyster House home page. (Image source) (Large preview)

As you can see, the developer has marked up the home page with various structured data. Specifically, they have associated it with three “types”: Local Business, Restaurant, and Service. Each of those schema types has been drilled down even further into details about the location, how to contact the restaurant, the cuisine type, and so on. This is great for connecting mobile users to the kind of local business they seek out.
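Schema markup can be expressed in several syntaxes; JSON-LD is a common one. For illustration only, a minimal Restaurant markup in that style could look like the fragment below (the address and phone number are placeholders, not the restaurant’s actual listing data):

```html
<!-- Hypothetical JSON-LD sketch for a local restaurant.
     The address and telephone values below are placeholders. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Restaurant",
  "name": "Henlopen City Oyster House",
  "servesCuisine": "Seafood",
  "address": {
    "@type": "PostalAddress",
    "addressLocality": "Rehoboth Beach",
    "addressRegion": "DE"
  },
  "telephone": "+1-302-555-0100"
}
</script>
```

Dropping a block like this into the page head is enough for crawlers to associate the page with the Restaurant type and its location details.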

Strategy 2: Shorten The Website

With some mobile websites, it may be okay to ask users to scroll four or five times before they reach the end of the page. Or to go two or three pages deep to get to a desired endpoint.

That said, that type of extended on-site experience probably isn’t ideal for local mobile users. While Google does pay attention to factors like time-on-site and pages visited, what you need to be more concerned with is high bounce rates and a lack of engagement or conversions.

In order to create this ideal situation for users while still appeasing the search gods, your focus when designing a website and its navigation is to keep it short and to the point.

I’m going to use the Bad Hair Day website for this example:

This is the first thing you see upon entering the Bad Hair Day website. (Image source) (Large preview)

The header of the website contains all the information someone might realistically need if they want to contact the hair salon and spa. The address is there along with a phone number (which does have a click-to-call function) and social media icons.
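The click-to-call function mentioned above is plain HTML; a minimal sketch looks like this (the number shown is a placeholder, not the salon’s real one):

```html
<!-- tel: links open the phone dialer on mobile devices.
     Placeholder number for illustration only. -->
<a href="tel:+13025550123">(302) 555-0123</a>
```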

Other types of websites would do well to put business-specific information and calls-to-action here as well. For example:

  • Location search
  • Hours of operation
  • Make an appointment or reservation
  • View a menu (for food)

…and so on.

The simplified navigation menu for the Bad Hair Day website. (Image source) (Large preview)

Scroll just a little bit down the website and you can open the hamburger menu. As you can see, this navigation is simply structured and keeps all the essentials on the top level for easy discovery.

One more scroll on the Bad Hair Day site takes you to this informational section. (Image source) (Large preview)

The home page of this mobile website only requires three full swipes before you get to the end of it, which is a truly nice touch. Rather than create an overly elaborate home page with summary sections of each page that force users to scroll and scroll, Bad Hair Day keeps it simple.

By offering such a user-friendly layout and structure, Bad Hair Day has created a truly awesome first impression. In addition, by keeping things simple, the website isn’t burdened by excessive amounts of images, animations, scripts, and so on. Because of this, the mobile site loads quickly.

Strategy 3: Localize Visual Content

If your sites are mostly comprised of large swatches of color and stock photography, this one won’t apply. However, if the designs you create include custom-made photos and videos, there’s a unique opportunity to use this visual content to rank in local search.

If it makes sense, include photos that strongly resonate with local residents. Recognizable images of the local landscape or cityscape will give visitors a reason to feel a stronger connection to the business. It’s kind of like bonding over a local sports team during a consultation call or first meeting. Only, you’re able to make this connection with them through your choice of imagery.

But that’s just how you appeal to visitors’ local ties on the website. How about in search?

For this, use alt text on images and videos. This is typically recommended for the purposes of accessibility (i.e. helping impaired visitors consume your content even if they can’t see or hear it). However, alt text is also readable by Google bots. If you use the right kinds of location-driven keywords in your image’s alternative text, that visual content can rank higher in local image searches. Just keep in mind that you don’t want to sacrifice accessibility for local SEO. Make your alt text descriptive while finding ways to infuse local keywords into it.
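To make that concrete, a location-aware alt attribute might look like the hypothetical example below (the file name and wording are invented; only the place name follows the article’s local-keyword advice):

```html
<!-- Descriptive alt text that naturally carries a location keyword,
     without sacrificing accessibility to keyword stuffing. -->
<img src="storefront.jpg"
     alt="Hand-shaped longboards on display at a surf shop in Rehoboth Beach, Delaware">
```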

One of the local business types I think this is particularly useful for is a real estate agency. Like Jack Lingo Realty. Here is a listing Jack Lingo posted on its website for a home in Rehoboth Beach:

Real estate listing on the Jack Lingo website. (Image source) (Large preview)

The top of the page includes a series of beautiful images taken of the house located at 17 West Side Drive, Rehoboth Beach, Delaware.

Now, open up the page source and look at what the first image’s alt text says:

Example of location-specific alt text used on the Jack Lingo website. (Image source) (Large preview)

The alt text includes a unique identifier at the start of it (probably to distinguish it from the other images in the gallery), but is then followed by the address of the property. For prospective homeowners who do their research via Google for properties in that particular neighborhood and community, well, guess what they find when they do a Google image search for it?
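Following that pattern, a hedged sketch of location-rich alt text might look like this (the file name is hypothetical; the address comes from the listing above):

```html
<!-- Descriptive alt text that pairs what the photo shows with the
     property's location (file name is hypothetical) -->
<img src="/photos/front-exterior.jpg"
     alt="Front exterior of 17 West Side Drive, Rehoboth Beach, Delaware">
```

Notice that the description serves accessibility first ("front exterior of…") while still carrying the street, town, and state that local searchers type into Google Images.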

Jack Lingo takes the top spots in local image search. (Image source) (Large preview)

Jack Lingo’s property images take the top spots. Pretty impressive, right? So, the next time you design a website for a client whose business depends on showing off a product or property, think about how you can optimize it for local image results.

Strategy 4: Add Reviews And Ratings When Possible

I always like to refer to the aggregation of reviews and ratings on your own website as a way to control the conversation about your brand. It makes sense, right? When customers are left without a podium to speak from, they’re going to make their own… on Yelp, Google, Facebook, TripAdvisor, and wherever they feel like it. While there’s no escaping this entirely, offering a space for reviews and ratings on your website can help control the flow of feedback.

It also can improve the look of a local search result.

The example I’m going to use for this is the Fairfield Inn & Suites Rehoboth Beach:

The top of the Fairfield Inn & Suites page displays the average user rating as well as number of reviews. (Image source) (Large preview)

As you can imagine, a major hotel property owned by Marriott will already receive a lot of reviews from the web.

An expansion of how users rate the Marriott property, on average. (Image source) (Large preview)

However, by adding reviews and ratings to its own website, Marriott is accomplishing a few things that will help it with local search users. For starters, there’s the transparency factor. Marriott has actively solicited customers for feedback on their hotel stay and published those reviews for all to see. Local users are very fond of online reviews, with 73% claiming that positive reviews increase their trust in a local business.

The Fairfield Inn & Suites listing includes an eye-catching rating. (Image source) (Large preview)

In addition, Marriott’s inclusion of a rating system on its website proves beneficial within local search results, too.

As you can see in the list of results for “Rehoboth beach de lodging”, Marriott is the only one that includes a rating—and an impressive one at that. If mobile users are quickly scrolling through search results for the most relevant and attractive business for their needs, a positive review might be enough to stop them dead in their tracks.
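Ratings like this typically appear in search results because the page exposes them as structured data that Google can read. A hedged sketch using schema.org’s AggregateRating type (the rating numbers here are made up, and Google’s own eligibility rules determine whether the star rating actually displays):

```html
<!-- Rich-result structured data: a sketch with hypothetical values -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Hotel",
  "name": "Fairfield Inn & Suites Rehoboth Beach",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.4",
    "reviewCount": "180"
  }
}
</script>
```

The markup must reflect real, visible reviews on the page itself; marking up ratings that visitors can’t see violates Google’s structured data guidelines.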

Strategy 5: Build Dedicated Location Pages

When designing websites with multiple locations, be sure to create a dedicated page for each location. There are on-site benefits to think about as well as search-related ones.

For starters, individual location pages reduce the amount of work visitors have to do once they get on the site. You’ve likely seen those “Location” pages before that are cluttered with a dozen or so locations, each with information related to address, phone number, email, website and so on. By giving each location a separate page, however, you don’t have to worry about compromising readability or focus.

The Tanger Outlets website demonstrates this point well as you can see that, in just a few clicks, visitors can quickly learn more about their personal location without the clutter or distraction of all the others.

The Tanger Outlets navigation includes a page dedicated to Locations. (Image source) (Large preview)

The Tanger Outlets navigation menu puts the “Locations” page right at the very top. It’s likely the first thing visitors search for as they aim to learn more about their local outlet mall and its offering of shops and brands.

The “Location” page on the Tanger Outlets site includes an interactive map. (Image source) (Large preview)

The “Location” page for the Tanger Outlets website then displays an interactive map. Users can drag the map around and try to find their location on their own or they can enter details below in the short form.

Example of a location-specific page and details from Tanger Outlets. (Image source) (Large preview)

Upon finding their location, users then receive a high-level overview of the location, phone number, and hours of operation for the Tanger Outlets near them. There are additional pages they can visit to learn more about stores and deals at that particular mall.

By creating dedicated location pages on your website, you’re also giving it an extra chance to rank within local search results.

Strategy 6: Place Your CTA Front And Center

As you might have noticed, there are common themes running through these strategies: simplicity and straightforwardness. The more quickly you can deliver information to your visitors through smart design techniques, the greater the likelihood they will engage and/or convert.

When it comes to designing call-to-action buttons for mobile, you obviously know what to do: make them big, colorful, clickable, and in the thumb zone. But what about placement? Some argue that a call-to-action should always be placed in the most logical of locations. In many cases, that’s directly after a descriptive section of text that “sells” visitors on the reason for clicking through.

On mobile, though, you don’t have time to waste. If visitors are searching explicitly for a local business that does X, Y, or Z, it’s beneficial to put your CTA front and center.

The Atlantic Oceanside is an extreme example of how to do this, but it’s one I believe is done well all the same:

The top of the Atlantic Oceanside page displays a “Book Now” button. (Image source) (Large preview)

The very top of the Atlantic Oceanside website features a prominent “Book Now” button. Granted, some users might not be ready to pull the trigger on a hotel reservation the second they enter the site, but it’s still a good idea to have the button there. It’s a reminder that the booking process will be as painless as possible.

The “Book Now” button appears a number of times throughout the Atlantic Oceanside website. (Image source) (Large preview)

For visitors who aren’t ready to book right away, the website includes the same CTA throughout the rest of the site. It’s consistently designed and worded so that visitors always know where to find it.

The Atlantic Oceanside site also includes the CTA in the navigation. (Image source) (Large preview)

There’s another instance of the CTA that I think is placed quite well and that’s the one that exists in the navigation. You can see that all the important details about a guest’s stay are presented first, but then “Book Now” and the business’s phone number are at the bottom of the list so users don’t have to dig through pages to find that information.

If you want to make conversions easier for mobile users, don’t bury your CTAs.

Strategy 7: Include Geotargeting Features

The last strategy I’m recommending is less about design and more about features you can apply to your site that give visitors a personalized experience.

Geotargeting and geolocation services (like beacon technology) were really hot topics a few years ago. Think back to when Pokémon Go was all anyone could talk about. Mobile users were willingly giving apps their location data in return for what they considered to be a valuable experience. I believe you should be doing the same when designing mobile websites for local search users.

With geotargeting features, you have the opportunity to enhance visitors’ experience in a way that a global-serving website can’t.

WSFS Bank is an example of a business that makes good use of this feature. First, it asks for permission to use the current location as determined by the user’s mobile device:

WSFS Bank politely asks visitors for access to geolocation data. (Image source) (Large preview)

Upon granting access to the mobile website, the user is then presented with information at the top regarding the closest WSFS Bank location:

A sticky top bar is now presented to the mobile user on the WSFS Bank website. (Image source) (Large preview)

There are other use cases for geotargeting that your visitors might find useful as well. For instance, you could offer targeted discounts, include in-store availability checks, and convert prices to their local currency (if not the same as your own). Ultimately, your access to their location should be used to improve their experience and compel them to convert online or visit the brick-and-mortar location.
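The “closest location” behavior WSFS Bank demonstrates can be sketched in a few lines. This is a minimal example, not a production implementation: the branch list and coordinates are hypothetical, and a real site would also handle the user declining the permission prompt.

```javascript
// Hypothetical branch list; a real site would fetch this from its backend.
const branches = [
  { name: "Downtown", lat: 39.745, lon: -75.547 },
  { name: "Suburban", lat: 39.802, lon: -75.456 },
];

// Haversine great-circle distance in kilometers.
function distanceKm(lat1, lon1, lat2, lon2) {
  const toRad = (d) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 6371 * 2 * Math.asin(Math.sqrt(a));
}

// Pick whichever branch is closest to the visitor's coordinates.
function nearestBranch(lat, lon) {
  return branches.reduce((best, b) =>
    distanceKm(lat, lon, b.lat, b.lon) < distanceKm(lat, lon, best.lat, best.lon)
      ? b
      : best
  );
}

// In the browser, ask permission first (as WSFS Bank does), then show
// the nearest location in a banner.
if (typeof navigator !== "undefined" && navigator.geolocation) {
  navigator.geolocation.getCurrentPosition((pos) => {
    const b = nearestBranch(pos.coords.latitude, pos.coords.longitude);
    console.log(`Closest location: ${b.name}`);
  });
}
```

Because `getCurrentPosition` triggers a permission prompt, it’s worth deferring the call until the visitor interacts with a location feature rather than firing it on page load.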

Wrapping Up

Designing for mobile-first isn’t all that tricky these days now that we’ve had time to adjust to it. That said, designing mobile websites for local search users is a different story. It’s not that they don’t appreciate a responsive design or shorter contact forms like everyone else. It’s just that their motivation and goals aren’t always the same as everyone else’s.

So, in addition to designing websites for mobile-first indexing, pay attention to how the design affects the website’s appearance in search results as well as how it’s received by local search users.

(lf, ra, il)
Categories: Web Design

How we fought webspam - Webspam Report 2017

Google Webmaster Central Blog - Thu, 06/28/2018 - 05:16

We always want to make sure that when you use Google Search to find information, you get the highest quality results. But, we are aware of many bad actors who are trying to manipulate search ranking and profit from it, which is at odds with our core mission: to organize the world's information and make it universally accessible and useful. Over the years, we've devoted a huge effort toward combating abuse and spam on Search. Here's a look at how we fought abuse in 2017.

We call the various types of abuse that violate the webmaster guidelines “spam.” Our evaluation indicates that for many years, less than 1 percent of the search results users visited have been spammy. In the last couple of years, we’ve managed to further reduce this by half.

Google webspam trends and how we fought webspam in 2017

As we continued to improve, spammers also evolved. One of the trends in 2017 was an increase in website hacking—both to manipulate search rankings and to spread malware. Hacked websites are serious threats to users because hackers can take complete control of a site, deface homepages, erase relevant content, or insert malware and harmful code. They may also record keystrokes, stealing login credentials for online banking or financial transactions. In 2017 we focused on reducing this threat, and were able to detect and remove from search results more than 80 percent of these sites. But hacking is not just a spam problem for search users—it affects the owners of websites as well. To help website owners keep their websites safe, we created a hands-on resource to help webmasters strengthen their websites’ security and revamped our help resources to help webmasters recover from a hacked website. The guides are available in 19 languages.

We also recognize the importance of robust content management systems (CMSs). A large percentage of websites run on one of several popular CMSs, and spammers have exploited this by finding ways to abuse their provisions for user-generated content, such as posting spam in comment sections or forums. We’re working closely with many of the providers of popular content management systems like WordPress and Joomla to help them fight spammers who abuse their forums, comment sections, and websites.

Another abuse vector is the manipulation of links, one of the foundational ranking signals for Search. In 2017 we doubled down on our efforts to remove unnatural links via ranking improvements and scalable manual actions. We observed a year-over-year reduction in spam links of almost half.

Working with users and webmasters for a better web

We’re here to listen: Our automated systems are constantly working to detect and block spam. Still, we always welcome hearing from you when something seems … phishy. Last year, we were able to take action on nearly 90,000 user reports of search spam.

Reporting spam, malware and other issues you find helps us protect the site owner and other searchers from this abuse. You can file a spam report, a phishing report or a malware report. We very much appreciate these reports—a big THANK YOU to all of you who submitted them.

We also actively work with webmasters to maintain the health of the web ecosystem. Last year, we sent 45 million messages to registered website owners via Search Console letting them know about issues we identified with their websites. More than 6 million of these messages were related to manual actions, providing transparency to webmasters so they understand why their sites got manual actions and how to resolve the issue.

Last year, we released a beta version of a new Search Console to a limited number of users and afterwards, to all users of Search Console. We listened to what matters most to the users, and started with popular functionalities such as Search performance, Index Coverage and others. These can help webmasters optimize their websites' Google Search presence more easily.

Through enhanced Safe Browsing protections, we continue to protect more users from bad actors online. In the last year, we have made significant improvements to our Safe Browsing protection, such as broadening our protection of macOS devices, enabling predictive phishing protection in Chrome, cracking down on unwanted software on mobile, and launching significant improvements to our ability to protect users from deceptive Chrome extension installation.

We have a multitude of channels to engage directly with webmasters. We have dedicated team members who meet with webmasters regularly both online and in-person. We conducted more than 250 online office hours, online events and offline events around the world in more than 60 cities to audiences totaling over 220,000 website owners, webmasters and digital marketers. In addition, our official support forum has answered a high volume of questions in many languages. Last year, the forum had 63,000 threads generating over 280,000 contributing posts by 100+ Top Contributors globally. For more details, see this post. Apart from the forums, blogs and the SEO starter guide, the Google Webmaster YouTube channel is another channel to find more tips and insights. We launched a new SEO snippets video series to help with short and to-the-point answers to specific questions. Be sure to subscribe to the channel!

Despite all these improvements, we know we’re not yet done. We’re relentless in our pursuit of an abuse-free user experience, and we will keep improving our collaboration with the ecosystem to make it happen.

Posted by Cody Kwok, Principal Engineer

Everything You Need To Know About Transactional Email But Didn’t Know To Ask

Smashing Magazine - Wed, 06/27/2018 - 03:30
Garrett Dimon 2018-06-27T12:30:49+02:00

Any application with user authentication can’t exist without email, yet email doesn’t always get the attention it deserves. With modern email service providers, it’s easier than ever to create a first-class transactional email experience for your users, but for most of us, the challenge lies in the fact that you don’t know what you don’t know. We’re going to dive into an end-to-end analysis of everything you need to bring your transactional email up to snuff with the rest of your web application.

We’ll address the difference between transactional and bulk emails and how and why to use email authentication. We’ll also talk about handling delivery edge cases gracefully, crafting great email content, and the key pieces of infrastructure you’ll want in place for sending email and monitoring delivery. Then you’ll be well on your way to being a transactional email pro in no time.

The Challenges Of Transactional Email

To some degree, email has traditionally been a second-class citizen because it’s more difficult to monitor and understand how well you’re doing. With your application, there are countless performance monitoring tools to provide insights into front-end, back-end, database, errors, and much more. With email, the tools are less well-known and a little more difficult to use effectively. So let’s explore some of the challenges facing email monitoring and reporting, and then we can look at the available tools and tactics that can work within the challenges and constraints to give you a more informed view of your transactional email.

The biggest underlying challenge with monitoring email is that it’s literally impossible to log in to every recipient’s inbox and check to see if they received the email. So right out of the gate, the best insights we can hope for are simply proxies or estimations of performance. The second biggest challenge is that every ISP plays by its own rules. What might get classified as spam by Outlook could go straight to the inbox in Gmail. And inbox providers can’t share their “secret sauce” because it would immediately be exploited by spammers. So what’s a developer to do?

Open rates can give you a rough approximation, but since they rely on tracking pixels, which can easily be blocked, they paint an incomplete picture. Inbox placement rates and delivery speeds can’t be directly measured either, so you have to settle for sending regular tests to seed accounts that you control. These aren’t perfect, but they’re the best available proxy for understanding delivery to the various inbox providers. We’ll address tools that help automate this later in the guide.

Adding domain authentication in the form of DKIM, SPF, and DMARC can be difficult and confusing, or, depending on the size of your company, getting access or approval for DNS changes can be cumbersome or impossible. Even then, it’s incredibly easy to get the DNS entries incorrect. If you’re not familiar with domain authentication, don’t worry, we’ll address it in-depth later.

Of course, even if you’re generally able to achieve great delivery, bounce handling introduces more variability. Recipients’ inboxes may be full. People change jobs, and email addresses become inactive. People make typos with email addresses. People may sign up with a group alias, and then one of the addresses in that group bounces. Temporary server or DNS outages can affect delivery for everybody on a given domain. And then there are spam complaints.

So right out of the gate, the deck is stacked against you. There are plentiful edge cases, and it’s incredibly difficult to get an accurate picture of delivery. Ongoing monitoring is complex, and there’s a lot of room to make mistakes. It paints a gloomy picture, I know. Fortunately, email has come a long way, and while it’s not trivial, there are good solutions to all of these problems.

Transactional vs. Bulk Promotional

Before we go further, we need to address the significant differences between bulk promotional email and your application’s transactional email. With the former, if an email is lost or delayed, nobody is going to miss it. With the latter, however, a missing or significantly delayed password reset can lead to additional support requests. Your transactional emails are as critical as a page in your application. You can think of a missing or delayed email as being roughly equivalent to a broken page in your web application. Email is a different medium, but it is still a central piece of the experience of using your application.

Because people expect and want to receive transactional emails, they see higher engagement in terms of open and click rates than your bulk promotional email. Similarly, transactional emails will be reported as spam much less frequently than bulk emails. And all of that leads to a better reputation for your transactional email than the bulk promotional emails. In some cases, that could be the difference between the inbox and spam folder. Or, it may just be a matter of which tab Gmail puts the email in. Regardless, the differences between transactional and bulk are stark enough that even Gmail officially recommends separating the streams. That way, your bulk reputation won’t drag down your transactional reputation.

This brings us to our first tip:

1. Separate your transactional and bulk sending streams using different domains or subdomains

In a perfect world, you’d send transactional email through your primary domain and relegate bulk to a subdomain, sending from an address like something@marketing.example.com, and each stream would have its own IP addresses as well.

Separating your streams is the first step and critical to laying the groundwork for the best possible email experience for your recipients. While you can’t guarantee delivery to the inbox, you can do a few things to stack the deck in your favor. Authentication is the next step to doing precisely that. Just like you wouldn’t launch a modern web application without a secure certificate, you don’t want to send email without fully authenticating it.

Email Authentication

You may have heard acronyms like DKIM, SPF, or DMARC, and you may have even copied and pasted some DNS entries to set these up. Or you may have skipped it because it felt a little too complex. Either way, these are all standards worth implementing, and they all complement each other and work together to build and protect your reputation. The exact approach to these will vary from provider to provider, but it’s always worth implementing.

Let’s start with DKIM. Without getting too much into the technical details, DKIM does two things. First, it acts as a sort of virtual wax seal on your emails to show that they haven’t been modified in transit. Second, it enables you to build domain reputation. While DKIM focuses on the domain, SPF focuses on providing a list of approved IP addresses for sending so that receiving mail servers have a better idea of whether an email is being sent from a legitimate source.

One significant benefit of DKIM is that it’s the key to avoiding “via” labels in Gmail or “on behalf of” labels in Outlook. These elements make your emails look a little more likely to be spam and can undermine the trust of your recipients. So DKIM is much more than a behind-the-scenes standard. It’s something that can directly affect the experience of your recipients.

That all brings us to the next foundational tip:

2. Authenticate emails with DKIM and SPF

While authentication can’t guarantee delivery, it’s a key facet of building a reputation and doing everything you can to ensure great delivery.
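In practice, SPF and DKIM are both published as DNS TXT records. A hedged sketch for a hypothetical example.com (the selector, the `include:` host, and the truncated public key are all placeholders that your email provider supplies):

```
; SPF: list the servers approved to send mail for the domain
example.com.                       TXT  "v=spf1 include:spf.example-esp.com ~all"

; DKIM: publish the public key under a selector from your provider
selector1._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=MIGfMA0GCSq..."
```

The `~all` qualifier is a "softfail" for unlisted senders; a stricter `-all` tells receivers to fail them outright. Your provider's documentation will give you the exact records to paste in.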

DMARC is designed to help protect against phishing attacks. It incorporates both DKIM and SPF to help you monitor sending for your domain and protect your domain reputation by enabling you to publish a DMARC policy. That policy tells inbox providers what to do when an email fails DMARC alignment.

Before DMARC, it was entirely up to inbox providers to choose how to handle emails that weren’t authenticated with DKIM and/or SPF, but with DMARC, you’re able to create a public policy that tells providers to quarantine (send to the spam folder) or reject (outright discard) emails that fail DMARC alignment.

The other benefit of DMARC is that it enables ISPs to deliver reports to you about the sources of email sent using your domain and the quantities that passed or failed alignment via DKIM or the return-path. This can enable you to track down legitimate sources that are failing alignment and take action to ensure those sources are authenticated. It can also help you quantify the amount of illegitimate email that’s being sent using your domain.

PayPal is the quintessential example of the importance of a good DMARC policy. Over the years, countless scammers have tried to spoof PayPal emails, but now PayPal has a published DMARC policy telling ISPs to reject emails that don’t pass DMARC. So if any scammers try to spoof a PayPal email, it will fail DMARC alignment, and ISPs can be confident in rejecting it outright because PayPal’s public policy says that if an email fails alignment, it should be rejected.

That’s a very brief overview of DMARC, but hopefully it helps provide the context for our third tip:

3. Establish and publish a DMARC policy

Also, if possible, set up a custom return-path to maximize your chance of alignment. Then, monitor your DMARC reports and make adjustments to ensure alignment for any legitimate sources of email. Finally, if your product or brand is the target of a high quantity of phishing attacks, begin phasing in a progressively more aggressive quarantine or reject policy over time.
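A DMARC policy is also just a DNS TXT record. As a sketch (the domain and report address are hypothetical), a common progression is to start in monitoring mode and tighten from there:

```
; Start by monitoring only (p=none) and collecting aggregate reports
_dmarc.example.com.  TXT  "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com"

; Later, once legitimate sources align, phase in a stricter policy,
; e.g. quarantining a percentage of failing mail:
; "v=DMARC1; p=quarantine; pct=25; rua=mailto:dmarc-reports@example.com"
```

The `rua` address is where ISPs send aggregate reports, and `pct` lets you apply a stricter policy to only a fraction of failing messages while you build confidence.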

Postmark’s DMARC tool is a free and easy way to establish a DMARC policy and begin receiving weekly reports about your domain. With your bulk and transactional email streams separated, and all of the aforementioned authentication set up, you’ve handled all of the foundational aspects of delivery. From here on out, we’ll focus on email handling and treatment within your application.

Understand The Email Lifecycle

On the surface, email can sound pretty simple, but when you break down the email lifecycle, there’s a lot of subtlety and opportunity below the surface. The better you understand it, the more you’re able to provide a nuanced and rich experience for recipients. Most of your opportunities to improve the email experience depend on understanding the nuances of email delivery and automating your application’s ability to process and handle them accordingly. So let’s look at the key events in the life of any email.


Queued

Once your application has assembled an email from the various bits of content, you’ll queue it up for delivery. Within your application, you’ll want to ensure that you’re sending email via background processing. We’ll discuss this in depth later, but the simple version is that any time your application communicates with a third-party service, you’ll want to handle that communication in the background. Assuming you’re using an email service provider, once you’ve made the request to their API, it will be queued for sending on their end as well.
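A minimal sketch of that background-processing idea, assuming an in-process queue with retries. A real application would use a proper job queue (Sidekiq, Bull, etc.) and the provider’s SDK; `sendFn` here stands in for the call to your email provider’s API:

```javascript
// Illustrative in-process queue; not a substitute for a real job queue.
const queue = [];

// The web request only enqueues; nothing blocks on the third-party API.
function enqueueEmail(message) {
  queue.push({ message, attempts: 0 });
}

// A background worker drains the queue, retrying transient failures
// so a flaky API call doesn't lose the email.
async function processQueue(sendFn, maxAttempts = 3) {
  while (queue.length > 0) {
    const job = queue.shift();
    try {
      await sendFn(job.message);
    } catch (err) {
      job.attempts += 1;
      if (job.attempts < maxAttempts) queue.push(job);
      // After maxAttempts, a real system would alert or dead-letter the job.
    }
  }
}
```

The key property is that the user-facing request never waits on (or fails because of) the email provider; the worker absorbs latency and transient errors.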


Sent

Like any service, your email service provider will have its own queue for processing and sending your email. In most cases, these queues are extremely fast. Whereas sending a bulk email to thousands of recipients can take seconds or minutes, most transactional email will be sent much faster.


Accepted

After the email is sent by your email provider, it will ideally be accepted by the inbox provider. However, “accepted” does not mean “delivered.” Think of it like the postal service: just because it has your letter doesn’t mean the letter has arrived; it still has to be processed and carried to its destination. Also, some inbox providers will accept an email but never ultimately deliver it, for a variety of reasons. So even when an email has been accepted, there’s no guarantee that it will eventually be delivered.


Rejected

While some inbox providers will quietly reject emails, in most cases rejection is explicit, and you’ll receive an explanation of the problem with the email. It may be IP or domain reputation, or it may be the content of the email. Unfortunately, you won’t always get a clear explanation of the reason for rejection.


Bounced

Bounces are a more specific type of rejection. When an email address doesn’t exist, the inbox is full, or delivery fails for some other reason, mail services will report back that the email bounced. In these cases, you can use your ESP’s bounce notifications to proactively take steps to correct the problem. We’ll discuss that in more detail later.


Delivered

Delivered is the state where the message has been handed to the recipient. It may have been delivered to the inbox, the spam folder, or one of Gmail’s tabs, but it has been delivered to some degree. You won’t ever receive an explicit notification that an email has been delivered, but it’s a key state in the lifecycle.


Opened And Clicked

Open tracking isn’t entirely reliable because the method used to determine when an email has been opened can be blocked by the email client. Since open tracking relies on the email client loading an invisible image, clients that block image loading mean that those opens won’t be reported. Still, open rates can serve as a good proxy for delivery. For instance, if you switch email service providers without changing anything about your emails and your open rates jump significantly, it’s safe to assume that your first email service provider was failing to deliver some portion of messages.
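The invisible-image mechanism can be sketched like this; the tracking host and per-message token scheme are hypothetical, and your ESP injects something equivalent for you when open tracking is enabled:

```javascript
// Sketch: a unique, invisible image appended to each HTML email.
// When the client loads it, the tracking host records an "open."
function trackingPixel(trackingHost, messageId) {
  return `<img src="https://${trackingHost}/open/${messageId}.gif" ` +
         `width="1" height="1" alt="" style="display:none">`;
}
```

Because the pixel is just an image request, anything that blocks remote images (a common client setting) silently suppresses the open event, which is exactly why open rates undercount.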

Click tracking is more reliable than open tracking, but it can bring complexities of its own. For instance, using Bit.ly or other URL shortening services is a common tactic used by spammers, so in most cases, the presence of a Bit.ly URL will put your email on the fast track to the spam folder. However, when done well through your email service provider’s click tracking, it can provide useful insights for your emails. Also, even if open tracking is blocked by a client, if someone clicks on an email, it’s safe to assume that the email was opened. So click tracking can help provide more accurate insights on open rates as well.
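Click tracking, by contrast, works by rewriting each link so it passes through the ESP’s redirect domain before landing on the real destination. A hedged sketch with a hypothetical tracking host and token scheme (ESPs do this rewriting for you when click tracking is enabled):

```javascript
// Sketch: rewrite every absolute link in an HTML email to pass
// through a tracking redirect that records the click, then forwards
// the recipient to the original URL.
function trackLinks(html, trackingHost, messageId) {
  return html.replace(/href="(https?:\/\/[^"]+)"/g, (_, url) => {
    const tracked =
      `https://${trackingHost}/click?mid=${messageId}` +
      `&url=${encodeURIComponent(url)}`;
    return `href="${tracked}"`;
  });
}
```

This is also why the redirect domain matters: when it’s your own authenticated (sub)domain rather than a generic shortener, the links inherit your reputation instead of a spammer-tainted one.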

With open and click tracking, it’s important to give some consideration to privacy. While they can be powerful tools for enriching your recipients’ experience and providing insights that can be used to improve your emails, they also touch on privacy issues. If you’re not going to do anything with the data they provide, you’re better off not using them. And if you’re in an industry where privacy is highly sensitive, you’ll probably want to think twice before enabling them.


Unsubscribed

Unsubscribes are less relevant for transactional emails, but an unsubscribe is still a request that should be respected. You may not be legally obligated to support unsubscribing from transactional emails, but it’s a status you may encounter, and when you do, you should honor it.

Spam Complaint

Like unsubscribes, spam complaints are less frequent with transactional email, but they still happen. If your spam complaint rate is high, it’s a good sign that you need to adjust the quantity and/or quality of the transactional emails you’re sending. As with bounce handling, you’ll want to be proactive about spam complaints. You should respect them, but it’s important to remember that some complaints are accidental; if someone mistakenly reports an email as spam, it could prevent them from receiving future bills or invoices.

This brings us to our fourth tip:

4. Closely integrate message events into your application

Most email service providers offer extensive web hooks to automatically notify your application about key events with each message. While bounce handling is the most critical event to track and handle, the other events can provide useful information to enrich your application and make transactional email a more seamlessly integrated element of your user experience.
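A sketch of what such an integration might look like, mapping webhook events to application actions. The event types and field names here are hypothetical; every ESP’s payload schema differs, so check your provider’s webhook documentation:

```javascript
// Sketch: classify incoming webhook events into application actions.
// Field names ("type", "hard", "email") are illustrative only.
function handleEmailEvent(event) {
  switch (event.type) {
    case "bounce":
      // Hard bounces mean the address is bad; stop sending to it.
      // Soft bounces (full inbox, outage) are worth retrying later.
      return event.hard
        ? { action: "deactivate_address", email: event.email }
        : { action: "retry_later", email: event.email };
    case "spam_complaint":
      // Respect complaints immediately by suppressing the address.
      return { action: "suppress", email: event.email };
    case "open":
    case "click":
      // Engagement data can enrich the user's activity timeline.
      return { action: "record_engagement", email: event.email };
    default:
      return { action: "ignore" };
  }
}
```

With a classifier like this behind your webhook endpoint, a hard bounce can surface in your UI ("we couldn’t reach this address, please update it") instead of failing silently.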

Take Care With Email Content

The content of your emails can play a role in both delivery and engagement. While some rules (such as avoiding the word “Viagra”) may be obvious, others are more subtle. Taking care to craft good content can drastically improve open rates or engagement.

We’ll group several considerations into our fifth tip for great transactional emails:

5. Take time to craft the content and structure of your emails

Things like the sender name and email address, the subject, preheaders, and MIME types can have a meaningful impact on engagement, delivery, and open rates. Don’t let these elements be afterthoughts. Make time to get them right, and continuously test and improve them as you would any other page in your application.

Senders, Subjects, And Preheaders

While every email client is different, they all provide some level of insight about an email before it’s opened through some sort of preview. That can be as simple as displaying the sender and the subject, but it sometimes also contains a preview of the content. This topic could justify an article unto itself, but suffice it to say that it’s worth spending some time viewing your emails the way your recipients will. This includes clearly naming the sender, writing a useful and concise subject line, and crafting the perfect preheader. Don’t let these elements be an afterthought; they can have a significant impact on your open rate.

HTML And Plain Text

The actual content of your emails is important, and including both HTML and plain text versions of your emails can have a huge impact on your recipients. Some people prefer plain text. Whether for performance, privacy, or accessibility, providing a well-formatted and considered plain text option is a win for that recipient. And some spam filters prefer to see a plain text version paired with an HTML version. Litmus has a great writeup on the importance of plain text options in emails for more details.
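If you assemble messages yourself rather than through an ESP’s template API, Python’s standard library can pair the two versions for you. A minimal sketch (addresses and content are placeholders):

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "Your receipt from Example App"
msg["From"] = "billing@example.com"
msg["To"] = "customer@example.com"

# Plain-text body first: clients that prefer (or only support) text use it.
msg.set_content("Thanks for your purchase!\nOrder #1234, total: $25.00\n")

# HTML alternative: capable clients render this version instead.
msg.add_alternative("""\
<html><body>
  <h1>Thanks for your purchase!</h1>
  <p>Order #1234, total: <strong>$25.00</strong></p>
</body></html>
""", subtype="html")

print(msg.get_content_type())   # multipart/alternative
```

Calling `add_alternative` after `set_content` converts the message to `multipart/alternative`, with the plain-text part listed first as the fallback.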

Accept Replies And Avoid “No-Reply” Addresses

Do what you can to avoid using “no-reply” email addresses. They send the wrong signal in every way possible. As a result, you’ll receive more spam complaints since people can’t reply to unsubscribe. It’s one-way communication and reduces engagement that could otherwise improve deliverability.

Ideally, the from address or reply-to address should route replies to a monitored support inbox. This provides the best experience for recipients and ensures that replies don’t get lost in the mix. There is one main consideration to keep in mind, though: if a user receives a password reset URL and replies, anybody with access to the reply will also have access to that URL. The same goes for any particularly sensitive information; then again, all things considered, you shouldn’t be sending highly sensitive information in email anyway.

Another option is to use inbound receiving email addresses. In the case of things like comment notifications where it might be useful for the recipient to respond directly to the email, setting up inbound email processing can give you the ability to avoid no-reply email addresses.

Regardless of the method, you should always do your best to avoid no-reply addresses. Your customers will appreciate it, and you’ll be much more likely to receive important feedback if you’re listening on all channels.

Handle Email With Care

The content of your email is important, but how you deliver it, and how you respond to delivery edge cases, can matter just as much. A lot happens (or can happen) with each and every email you send. Most conversations about email focus simply on sending it, but how you send it deserves equal attention.

We’ll group this batch into the sixth high-level tip for great transactional email delivery:

6. Invest in the infrastructure to send and deliver emails reliably

Sending emails sounds simple, but there’s a lot more to it than writing a few lines of code. It’s important to build and maintain the proper infrastructure like background processing and bounce handling to ensure the highest possible reliability for your email.

Background Processing

Assuming you’re using an email service provider, you’ll want to ensure that all of your email sending happens in background processes. This applies to communication with any external service, for a few reasons. First and foremost, an external service can always be down, so if a request fails, it’s important to be able to retry it automatically after a set period of time. There could also be a problem with the request itself, or something may have changed on the external service’s side. Regardless of the reason, designing resiliency into your email sending will save you grief at some point.

Similarly, a good background processing setup can make it easier to be alerted to and troubleshoot issues when something does go wrong. If you set up alerts when a queue runs high, you’ll know more quickly when there’s a problem. Also, assuming your background processing is capturing and logging errors, you’ll have a much easier time identifying the source of the issue.
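The retry idea above can be sketched in a few lines. This assumes a generic `send` callable wrapping your ESP’s API; in a real application the loop would run inside a background worker (Celery, Sidekiq, and the like), never in the web request cycle.

```python
import time

def send_with_retry(send, payload, attempts=5, base_delay=1.0):
    """Call `send` (a wrapper around your ESP's API), retrying
    transient failures with exponential backoff: 1s, 2s, 4s, ..."""
    for attempt in range(attempts):
        try:
            return send(payload)
        except ConnectionError:
            # Transient failure: back off and retry, unless exhausted.
            if attempt == attempts - 1:
                raise   # let the job queue capture and log the failure
            time.sleep(base_delay * 2 ** attempt)
```

Re-raising on the final attempt is deliberate: it hands the error back to the queue, which is what triggers the alerting and logging described above.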

Understand Dedicated IPs

If you’ve looked into transactional email to any degree, you’ve likely encountered the concept of using a dedicated IP address. While there are benefits to using a dedicated IP address, it’s not a black and white issue. In some cases, a dedicated IP address can hurt more than it helps.

With almost every email service provider, you have two options for sending. The first option is to send from the shared IP pool. In these cases, your delivery can be affected by the behavior of other senders using that same IP address. If those senders are nefarious, it can drag down the IP address’s reputation. However, if those senders are good, it can lift the IP address’s reputation.

The second option is a dedicated IP address. With a dedicated IP address, if there are reputation issues with the IP address, you only have yourself to blame. The catch is that, in order for a dedicated IP address to work well, you have to have a consistent daily volume that’s high enough to build and maintain a reputation. That’s somewhere around 10,000-20,000 emails per day. You also have to be careful to slowly warm up the IP address by sending progressively more mail consistently over a set period of time. And while there are no bad senders to drag your reputation down, you also receive no lift from good senders who could help buoy your IP reputation. With most email service providers, dedicated IP addresses cost more as well.

Finally, while IP reputation still plays a role, inbox providers are increasingly weighting domain reputation in conjunction with IP reputation. Since IP addresses are increasingly disposable, putting more reputation on domains helps mitigate spammers cycling through domains because new domains would have to build reputation. This also means that if you’re having widespread delivery issues, those problems could be attributable to either the IP address or your domain. So swapping IP addresses may help, but if your domain reputation is suffering, changing the IP address won’t help. You’d instead have to focus on cleaning up your domain reputation.

While dedicated IP addresses can be great under the right circumstances, they’re not a slam dunk. And if you’re using a shared IP address, you’ll want to monitor it closely to ensure that other senders aren’t ruining its reputation or landing it on blacklists.

Bounce Handling

Emails bounce. There’s no way around it. The reasons for bounces vary, but the need to handle bounces gracefully is universal. Think of bounce handling as exception handling for email. When an exception happens, you don’t want it silently discarded. You want to know that it happened and what caused it so that you can fix it. The same goes for bounce handling with one caveat. With bounce handling, you can empower your users to fix most problems themselves. This simultaneously increases customer satisfaction and reduces support requests.

When an email bounces with a hard bounce, the most important step is to stop attempting delivery to that address. While some hard bounces may eventually start working again on their own, repeated bounces to the same address are a highly negative signal to inbox providers. From their perspective, it means you’re not keeping your lists clean and that there’s a good chance you’re a spammer.

Unfortunately, if you stop attempting delivery to an address, and that address begins working again, your recipients won’t be able to get their emails. That’s where bounce handling comes in. Using webhooks, your email service provider can automatically notify you of new bounces. Then, you can use that information to present alerts within your application that there were issues delivering the email. That way, your users can correct the issue, and then reactivate delivery.
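One way to sketch that flow is an application-side suppression list: the bounce webhook deactivates the address, sends are gated on it, and the user’s correction reactivates it. The class below is a simplified illustration, not any provider’s API; a real system would persist this in a database.

```python
class SuppressionList:
    """Application-side record of addresses with hard bounces."""

    def __init__(self):
        self._suppressed = {}   # address -> bounce reason

    def record_hard_bounce(self, address, reason):
        """Called from the bounce webhook: stop future sends."""
        self._suppressed[address] = reason

    def can_send(self, address):
        return address not in self._suppressed

    def reactivate(self, address):
        """Called after the user corrects the address in your app."""
        self._suppressed.pop(address, None)

suppressions = SuppressionList()
suppressions.record_hard_bounce("old@example.com", "mailbox does not exist")
print(suppressions.can_send("old@example.com"))   # False
suppressions.reactivate("old@example.com")
print(suppressions.can_send("old@example.com"))   # True
```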

Postmark makes this even easier with Rebound, a simple JavaScript snippet that can be customized and included in your application to proactively alert users to delivery issues so they can correct the issue before it leads to bigger problems or support requests.

Notification Management

With transactional emails, unsubscribing is less of an issue than it is with bulk promotional emails, but providing recipients a way to unsubscribe, or to manage the volume and types of transactional emails they receive, is still well worth doing.

One-click unsubscribes for each type of email you send can make things convenient if recipients don’t want to receive email for certain types of notifications. Alternatively, providing a preference center for transactional emails is a great way to put more control in recipients’ hands. However, if you go the preference center route, keep the options to a minimum. Keep in mind that a page with an abundance of checkboxes can be overwhelming or confusing. So while granular control is nice, too many options can be counter-productive.

One of the best ways to cut down on frequent notifications is to offer options such as instant, daily, or weekly summaries. That way, you give recipients significant control over notifications to enable them to drastically reduce the quantity without overwhelming them with fine-grained control for dozens of different types of notifications.

Regardless of the method, recognize that giving control over the frequency of notifications can go a long way to helping your customers while also helping to reduce your overall volume of emails. It’s a win-win for your customers and your email costs.

Tools And Monitoring

While other facets of application development have made impressive strides in tooling, best practices, and reliability, email is still somewhat of a black box. With applications, you can monitor uptime, page load times, application performance, and countless other aspects. Since email inboxes are private, however, there’s no way to accurately measure real delivery information. Open and click rates can serve as decent proxies, but they’re still only proxies. Fortunately, there are some great tools that can complement each other and, together, provide a relatively clear picture of your email delivery.

This is key to appreciating our seventh and final tip:

7. Use available tools to monitor and improve your delivery

Just like you’d monitor your application’s uptime or performance, it’s equally important to monitor email delivery. While no one tool can tell you everything, a combination of great tools can make a huge difference in ensuring that your email delivery is reliable and help troubleshoot those times when it isn’t.

In order to have good coverage, you’ll want to use the following tools and pay close attention to patterns over time.

  1. Monitor trends in open and click rates for your emails. It’s only a proxy, but it’s good for relative historical numbers. For example, if you notice your open or click rates falling dramatically over time, that’s often an early warning sign that you may be encountering delivery issues. Since an email can’t be opened if it’s not delivered successfully, decreases in open rates can be caused by delivery issues.
  2. Gmail offers Postmaster Tools, which can help you gauge IP and domain reputation to understand which email sources may be having delivery issues. This is a great forensic tool to turn to when you suspect delivery problems. It only provides insight from Gmail, but that’s often a good-enough proxy for how your reputation might look to other providers as well.
  3. Use the MXToolBox Blacklist Check to see if your domain or IP address has landed on a blacklist. If you’re on a shared IP address, you probably want to set up a permanent and automatic monitor for that shared IP address so you’ll know sooner if you end up on a blacklist.
  4. Use a tool like GlockApps or 250ok to monitor inbox placement for your emails. It’s important to keep in mind that these tools rely on seed lists to test delivery. That is, since they can’t test delivery to real recipients’ inboxes, they have to use test addresses as a proxy. Like with most email delivery tools, this isn’t a perfect science, but in practice, it’s close enough to still be very useful at gauging delivery quality.
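For the blacklist checks in step 3, the underlying mechanism is a DNS lookup against the blacklist zone with the IP’s octets reversed: any answer means the IP is listed, NXDOMAIN means it isn’t. A sketch of that mechanism (`zen.spamhaus.org` is one well-known public zone, used here as an example; production checks should respect each list’s usage policies):

```python
import socket

def dnsbl_query_name(ip, zone="zen.spamhaus.org"):
    """Build the hostname for a DNSBL lookup: reverse the IPv4
    octets and append the blacklist zone."""
    return ".".join(reversed(ip.split("."))) + "." + zone

def is_blacklisted(ip, zone="zen.spamhaus.org"):
    try:
        socket.gethostbyname(dnsbl_query_name(ip, zone))
        return True           # any A record means the IP is listed
    except socket.gaierror:
        return False          # NXDOMAIN: not on this blacklist

print(dnsbl_query_name("192.0.2.1"))   # 1.2.0.192.zen.spamhaus.org
```

Services like MXToolBox simply run this lookup against dozens of zones at once and monitor the results over time.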

The more monitoring and alerting you have in place, the sooner you’ll know about problems and be able to correct them. Often, poor email delivery is an invisible problem that only surfaces when support requests begin to show up, but by then, hundreds or thousands of emails may already have ended up in spam folders or not arrived at all. Just like you wouldn’t want your customers to be the ones alerting you to application downtime, you don’t want them to be the first to alert you to potential delivery issues either.

Bring It All Together

You have plenty of options when it comes to sending email. You can even set up a server and Mail Transfer Agent (MTA) and send it yourself if you’d like, but you’ll be taking on a lot of responsibility and overhead. Managing reputation is difficult. Establishing relationships with ISPs is even harder.

Unless sending email is your core service, you’re often better off turning to an email service provider. Even then, it’s important to recognize that great delivery can’t be taken for granted. Although all ESPs claim to provide great delivery, that’s not always the case. When you’re evaluating ESPs, you’ll be much better off using the tools mentioned above to gather quantifiable delivery data for yourself rather than taking their word for it. This is true whether you use a shared IP address or a dedicated one. Great delivery isn’t automatic, and you should always be gathering hard data on your delivery.

Regardless of how you handle email, make sure to treat it as an extension of your application’s user experience rather than an afterthought. Take time to write concise and helpful emails, and do everything possible to seamlessly integrate it into the user experience. Be judicious about firing off too many emails, and give your users the ability to tune email notifications for their needs.

Finally, monitor your transactional email delivery like you would any other service in your stack. If your users aren’t receiving critical emails like password resets and invoices, you’ll be losing goodwill, and your support costs will increase. Don’t let your email delivery fail quietly. Make sure that you’re notified quickly and loudly of any potential delivery issues long before it gets bad enough for your customers to email you.

Your application can’t work without email. While it’s not as easy to measure or monitor as most aspects of your application, it’s still a critical piece of functionality that deserves your full attention. Invest the time in making your email experience great, and you’ll unquestionably reap the rewards.

(ra, il)
Categories: Web Design

Introducing the Indexing API for job posting URLs

Google Webmaster Central Blog - Tue, 06/26/2018 - 04:53

Last June we launched a job search experience that has since connected tens of millions of job seekers around the world with relevant job opportunities from third party providers across the web. Timely indexing of new job content is critical because many jobs are filled relatively quickly. Removal of expired postings is important because nothing's worse than finding a great job only to discover it's no longer accepting applications.

Today we're releasing the Indexing API to address this problem. This API allows any site owner to directly notify Google when job posting pages are added or removed. This allows Google to schedule job postings for a fresh crawl, which can lead to higher quality user traffic and job applicant satisfaction. Currently, the Indexing API can only be used for job posting pages that include job posting structured data.

For websites with many short-lived pages like job postings, the Indexing API keeps job postings fresh in Search results because it allows updates to be pushed individually. This API can be integrated into your job posting flow, allowing high quality job postings to be searchable quickly after publication. In addition, you can check the last time Google received each kind of notification for a given URL.

Follow the Quickstart guide to see how the Indexing API works. If you have any questions, ask us in the Webmaster Help Forum. We look forward to hearing from you!

Posted by Zach Clifford, Software Engineer

How To Get To Know Your Users

Smashing Magazine - Tue, 06/26/2018 - 04:50
By Lyndon Cerejo, 2018-06-26

(This article is kindly sponsored by Adobe.) Users are at the heart of User-Centered Design (UCD), with designers focusing on actual users and their needs throughout the design process. The goal is to design interfaces and products that work for those users.

As much as we would like to think that our users are like us, they are not. Anyone involved in the creation of a product or an interactive experience, be it a site, system, or an app, is not a typical user — and that includes all the business stakeholders, designers, and developers. As advocates of users, we often have to remind ourselves and others of one of the primary UCD commandments:

“Know thy user, and YOU are not thy user.”

Arnie Lund

How well do we really know our users? Traditional UX research focuses on user needs, expectations, and goals, most of what is visible and observable, like the tip of an iceberg. That works well for designing user experiences that meet user needs, allowing them to complete tasks and achieve their goals. But if the design needs to persuade users to take some action, we need first to identify what motivates or inhibits them.


This article will look at how going below the surface during user research helps us really understand what triggers our users, and how those deeper insights will help us design for persuasion.

Traditional UX Research

One of the first steps in design is identifying and researching who we are designing for so that we can focus on the groups of users that matter the most, and ensure that the design meets or beats their expectations.

User research is a great way for us to get a deep understanding of the people we are designing for. User interviews and contextual inquiries, focus groups, and surveys are commonly used research techniques to understand actual users, along with their needs, expectations, and goals. Sometimes, during user interviews, feelings or emotions may be mentioned in the conversation, but are not usually the central focus of the research.

User interviews that are based on task analysis usually focus on:

  • Who they are (profile);
  • What they do, when and where (context);
  • Why they do it (needs, goals, tasks);
  • How they do it (experience);
  • What they like or dislike (frustrations).

This research is valuable; it gives the entire team a shared understanding of actual users and builds empathy during the design process. One way to bring these users to life and make it easy for everyone to visualize the actual end users of the product or service is by creating user personas. A persona is a fictional, yet realistic description of a typical user from a key user group. There are many variations and formats, but the description usually includes personal, professional and technical information about the user, along with their knowledge, experience, goals, and frustrations related to the product or service. Give them a name, and put a face to that name, and you get to meet “Soccer Mom Sue” or “Function Over Form Fitz” (shown below).

Car buyer persona Fitz Grant, extrapolated from User Experience Takeaways From Online Car Shopping, based on persona development questions from usability.gov.

One way to keep these users top of mind is to put up posters of personas in the work area to remind us of who we are ultimately designing for, during all stages of the project, from defining requirements through design and development.

When prioritizing new features and enhancements, the deciding factor is no longer subjective, or based on personal preferences and likes of HiPPOs (Highest Paid Person’s Opinion), but focused on the user: “Will this new feature entice Fitz to use our site to configure a car?” or “Will this make it easier for Susan to compare our car models?”. Fitz and Susan also serve as constant reminders to designers who try to create “bleeding edge” designs (“Would Fitz find this interface intuitive?”), and to developers tempted to incorporate the latest technology (“Will Susan’s computer support this new technology?”). In time, Fitz and Susan become entrenched in the project, helping us build for them, not us, resulting in a solution that is useful, usable and meets their needs.

This works well to create products and interfaces that are functional, efficient and usable, allowing users to complete their tasks and achieve their goals.

Behavioral Research For Persuasive Design

Usability- or task-oriented user research mainly focuses on the cognitive level: how and why our users think, reason, and act the way they do. We may sometimes scratch the surface and get some feelings from these users, but we don’t usually probe deep into their emotional experiences.

Behavioral research builds on traditional research approaches since persuasive design is aimed at changing behavior. Some examples of behavior changing actions in design include persuading users to try or buy a product or service (get a quote, schedule a test-drive), start or stop a behavior (start exercising, stop smoking), or convincing them to act on a belief or information (donate, vote).

If you are already using persuasive design techniques, quantitative data can identify which tactics are working for your users. But there is a wealth of information that you can get from qualitative methods like user interviews. One-on-one interviews allow researchers to dig below the cognitive level, and reach the emotional level to get to users’ feelings and beliefs. Advertisers and marketers have been doing this for years and you can see the results in the campaigns you are bombarded with daily.

During behavioral research, researchers focus on the users’ feelings, emotions, motivations, and barriers related to the action being triggered. For a common behavior target like getting the user to buy something, look for how users feel during the purchase journey. As an extreme example, while probing the purchase of “stigmatized products” (tobacco, alcohol, or personal enhancement products), users may express feelings of embarrassment, shame, or even guilt associated with the purchasing experience.

How do you get the user to discuss their emotions? Psychotherapists go through years of education and training before they master the art of unearthing mental, emotional, and behavioral issues to help their clients. Since our user interviews do not have the same life-changing impact or consequences, we do not require the same level of rigor. However, this is not something you can learn overnight, though you can learn more through courses like HFI’s PET Design. This article will not attempt to teach you how to do it, but it will introduce how these user interviews differ.

User interviews that explore emotions, beliefs, and feelings center on the intended action and:

  • How users feel (about the intended action);
  • What emotions are evoked in the process;
  • What are their emotional motivations to complete the intended action;
  • What are their barriers that may prevent them from taking the intended action;
  • Their values and beliefs related to the intended action;
  • Social or cultural factors that may impact the experience.

In a nutshell, the user interviews still start off by building rapport and learning about the user. After that, the interview focuses on the desired action by using a scenario and stimulus (e.g. buying a car), and a few closed-ended questions to get the participant on a topic (e.g. “When was the last time you bought a car?” and “Did you research your car online?”). Once the user is in the frame of mind of the scenario, the questions transition to open-ended questions that probe for emotions (“Can you describe that experience?” “What did that feel like?” “Why did you feel that way?”). The key is to guide the user from thinking to feeling, from facts and reason to emotions, and probing the subconscious, looking for motivators and barriers to them taking the desired action in the scenario. Interviewing techniques of active listening, not leading the user, and not being judgmental, all hold true.

Car Shopping Example

Let’s use an example of buying a car and see how we can really get to know the users. For the sake of simplicity, let’s focus on one persona (shown previously): Function Over Form Fitz, created based on the persona development questions from usability.gov. We’ll follow Fitz as he contemplates buying a car.

Fitz Grant is a 44-year old IT director who recently moved to Georgia to escape the cold winters of New Jersey. He, his homemaker wife, and two middle-school sons have always been a single-car household, but without access to reliable mass transit in his suburb, he is looking to buy a car for his daily 90-minute work commute.

The persona and scenario above are common for task analysis oriented user research, including their needs and frustrations. However, if you focus on probing the emotional aspect of buying a car, user interviews may uncover the following themes for barriers and motivations:


Barriers (strongest to weakest, strength shown using a battery indicator):

  • Fear of failure
    • ‘I’m scared of making the wrong choice with such a high price tag and letting down my family’
  • Fear of being manipulated
    • ‘I detest the thought of having to deal with aggressive sales tactics or being pressured into unnecessary upgrades or warranties’
  • Fear of compromise
    • ‘I will be disappointed if I have to compromise on features I need because of dealer inventory or cost’
  • Fear of being judged
    • ‘I don’t like being thought of as a cheapskate; I am just looking for value and a good deal.’

Motivators (strongest to weakest):

  • Safety
    • ‘I need to be safe and secure in my car, especially with all the distracted driving around me’
  • Control
    • ‘I would like to be able to customize options that are important to me; I don’t want the “technology package”, I only need the safety features and Bluetooth connectivity for my calls’
  • Value for Money
    • ‘For the price I will be paying, I should get some ongoing service benefits like free oil changes from the dealer.’
  • Excitement
    • ‘I get a high when I can combine incentives, rebates, and discounts to get a great deal!’
  • Knowledge
    • ‘I like to be prepared and research on my own, so I can be confident with my choices’
  • Self-Image
    • ‘I’m technology-savvy enough to do everything online, except the test drive!’

Armed with this additional information, we can orchestrate the experience to weaken the barriers and strengthen the motivators to get users like Fitz to take the intended action, like requesting a quote on a car currently in stock in the dealer lot. To weaken Fitz’s “Fear of Failure” barrier, a site could highlight that they offer a 24-hour test drive, which allows him to drive it home, see how it fits in his garage, and sleep over the decision.

Given that safety is such a strong motivator, the site could frame the conversation around safety through a combination of color, imagery, content, third-party award badges (IIHS Top Safety Pick+), user testimonials, and interactive simulations. Function Over Form Fitz may even change to Safety Minded Fitz.

This was a hypothetical example of how understanding your users’ barriers and motivations can help you preemptively address them through design, and make it easier for your user to take the intended action. It is important that we do this in an ethical manner, without resorting to pressure, or deceit.


Traditional user research helps us design to meet users’ transactional and usability needs, making sure they can use a design. Research for persuasive design digs below the surface thinking level to the feeling level, and moves beyond the rational to the emotional level, to influence users to want to use the design. Getting to know your users at a deeper level will help you use psychology in design to get users to engage in behaviors they were already considering, with you instead of a competitor.

Further Reading

This article is part of the UX design series sponsored by Adobe. Adobe XD is made for a fast and fluid UX design process, as it lets you go from idea to prototype faster. Design, prototype and share — all in one app. You can check out more inspiring projects created with Adobe XD on Behance, and also sign up for the Adobe experience design newsletter to stay updated and informed on the latest trends and insights for UX/UI design.

(ms, ra, il)

What Newsletters Should Designers And Developers Be Subscribing To?

Smashing Magazine - Mon, 06/25/2018 - 05:00
By Ricky Onsman, 2018-06-25

We put out the call on Twitter and Facebook: “What email newsletters are you following these days?” The task of compiling your (many, many) responses has fallen to me.

I should disclose that I have a vested interest in that I currently edit a bi-weekly email newsletter for a conference organizer, UX Australia. In fact, over the years, I’ve edited dozens of email newsletters — some more successful than others.

Anyway, for the purpose of this article, we need to make a few things clear.

First, the newsletters that were most namechecked are included here in their own section: “The Favorites.” The eight that made this list comprise a formidable toolkit of newsletters for any web designer or front-end developer.

Second, I found that there are several types of newsletters which I tried to include representatively:

  • Aggregated links
  • Editorial
  • Company/product/service/event promotions
  • Tutorials
  • Broad industry news
  • Specific tech news
  • Combination of any of the above

Third, there is a lot of overlap in linked content from one newsletter to the next. That’s to be expected when everyone wants to break news and aggregate link-worthy articles.

Fourth, I focused on great newsletter content. It could be presented as sophisticated HTML with videos and infographics, or it could be no nonsense plain text with minimal descriptions, as long as the content — including how reliable any links are — is good.

Lastly, I tried to keep the focus to only actual newsletters that arrive in your email inbox, and that focus on what web designers and developers do. That’s not to deny the value of newsletters that give designers and devs a broader context for their work; I just had to draw the line somewhere.

I also excluded newsletters that claim to be bi-weekly but have produced just four issues in the last nine months (even if those four issues were very good) and “occasional updates.”

So, let’s see what we have. Buckle up.

The Favorites Smashing Magazine

Well, of course a lot of people mentioned our own newsletter, and we’re proud and grateful for our readers’ loyalty. I’ll also point out that ours is a rare example of all of the types of newsletter mentioned above, and one with a strong, clear voice. Just sayin’. Twice a month.

CSS Layout News

A high-quality weekly collection of tutorials, news and information on all things CSS Layout, edited by Rachel Andrew, editor-in-chief of Smashing Magazine, web developer, writer, speaker, author of The New CSS Layout, co-founder of Perch CMS. Weekly.


CSS-Tricks

It used to be primarily about CSS, but over the years CSS-Tricks has become about all things web design and development, and very popular it is, too. The newsletter is a great way to keep up, as Chris Coyier and his team regularly pull out new insights and clever styling techniques. Weekly.

CSS Weekly

Front-end dev Zoran Jambor’s hand-picked selection of CSS news, views, tips and techniques from a very wide range of sources, on everything from Unicode patterns to accessibility to security and query feature management. Weekly.

Layout Land

Mozilla Designer and Developer Advocate Jen Simmons knows a lot about CSS Grid and, lucky for us, she’s happy to share. The chief purpose of the newsletter is to let subscribers know about new videos available on the YouTube channel of the same name, but the editorial comments are great value in themselves. Monthly.

Web Development Reading List

Anselm Hannemann, front-end developer and founder of Colloq, compiles a mix of dev & design news, commentary, techniques, tools and broader work/life matters every week. There’s usually a short but pithy editorial, and then up to a dozen links with a paragraph about why each might be interesting to you. He also contributes monthly WDRL issues to Smashing Magazine. Highly readable.

UX Design Weekly

A hand-picked list of the best user experience design links, curated by Kenny Chen. A clearly defined balance of articles, tools & resources, media, portfolios and news — all focused on UX design. Has the twin virtues of feeling hand-picked and providing a great range of topics. Weekly.

JavaScript Weekly

From the Cooperpress family of publications, this is one of their strongest newsletters, a well chosen round-up of JS news and articles. The first set of article links get a brief and relevant paragraph description, while the rest are divided into Jobs, Tutorials, Opinions, Videos, Code and Tools with one line descriptions. Comprehensive and readable. Weekly.

Accessibility A11yWeekly

David A. Kennedy wrangles WordPress themes for Automattic, is an accessibility evangelist, and compiles this very useful dose of links to web accessibility news, resources, tools and tutorials. Weekly.


WebAIM Newsletter

WebAIM has become a major and reliable resource for guidance and discussion relating to web accessibility. The newsletter includes featured articles, technical tips, resources, and questions from WebAIM's discussion forum. The featured articles come both from WebAIM staff and from key web accessibility figures around the world. Monthly.

Animation UI Animation Newsletter

Keep up to date on the best web animation, motion design, and UX resources on the web. Subscribe for a weekly collection of curated tutorials and articles — plus advice on how to make web animation work for you. Written and curated by designer, interface animation consultant, speaker and author Val Head. Weekly.

Animation at Work

A hand-picked selection of articles, videos, book reviews, and other goodies pertaining to the wonderful worlds of web animation and motion design (which until last month was called “Web Animation Weekly”). It’s Rachel Nabors’ project, but the hands doing the picking belong to a range of guest editors who can be nominated by readers. Monthly.

Email Really Good Emails

RGE aims to be “the best showcase of email design and resources on the web”. A 3,300-strong gallery of examples, excellent articles, and links to a lot of resources suggest they really might be “the epicenter of the email earthquake, but in a good and happy-shakey kind of way.” You’d expect this lot to have a good newsletter, and they do. Monthly.


Email design and strategy studio Action Rocket’s very useful collection of links to articles on design, HTML and creative concepts for email drawn from their own work and other pros in the field. Weekly.

Emerging Technology Dev Diner

Edited by Patrick Catanzariti, this is very well organized and aims to keep web developers inspired and in the know, with links to the latest in VR, AR, wearables, the Internet of Things, AI, robotics and all sorts of other new and emerging tech. Weekly.

Front-End Friday Front-end

ZenDev founder, front-end consultant and trainer Kevin Ball covers CSS-focused developments, industry news, tutorials and dev resources. Each newsletter has an intro from Kball followed by three sections: CSS, JavaScript, and Other Awesomeness (each with five items with handcrafted descriptions). Weekly.

Friday Front-End

This is a nice, simple format that works very well. CSS developer Scott Vandehey tweets a daily link to an interesting front-end article. At the end of the working week, these links are packed with a short description into an email. The selection is a thoughtful mix, so it feels less like a bunch of random links and more like a summary of key stories. Weekly.

Frontend Focus

Just one of a suite of targeted publications by Peter Cooper, software developer, podcaster and Publisher-in-Chief of tech niche newsletter company Cooperpress. This one focuses on HTML, CSS, WebGL, Canvas and front end-related news updates, articles and tutorials. Others cover JavaScript, Ruby, databases, mobile and more. Weekly.

Web Design Weekly

A newsletter and blog by Jake Bresnehan that will help you stay on top of all things web design and front-end development. Links to tips & tricks, in-depth articles, interviews, tools & resources, jobs and the occasional quirky story. Weekly.

Web Tools Weekly

A front-end development and web design newsletter with a focus on tools, curated by Louis Lazaris. Each issue features a brief tip or tutorial, followed by a round-up of apps, scripts, plugins, and other resources to help front-end developers solve problems and be more productive. Weekly.

Front End Front

A crowd-curated feed of front-end related articles, by brothers Stelian and Sergiu Firez. Site visitors vote up articles linked to from the feed. The newsletter is a list of the 10 most recent top-voted articles. Topics are what you’d expect: CSS, JavaScript, HTML, images, performance — mostly technical, some bigger picture. Definitely some links not seen elsewhere. Weekly.

JavaScript ES.next News

The latest in JavaScript and cross-platform tools. Get five ECMAScript.next links delivered to your inbox, every week. Not afraid to mix highly technical code-focused pieces with bigger picture industry stories. Curated by Dr. Axel Rauschmayer and Johannes Weber. Weekly.

Typography Adventures in Typography

A love letter to the written word, by Robin Rendle. Topics include “calligraphy, lettering, display type, micro type, books about fonts, type specimens, neon lights, posters, morse code, stamps, literature, web design, and books about seeds”. The format is that of a proper letter written by Robin to you, which is rather lovely when it’s from someone so passionate. Weekly.

Fresh Fonts

Curated by Noemi Stauffer and featuring the latest free and open-source fonts, the most solid new typefaces by indie foundries and discount codes, this is a concise and well-compiled exploration of (mostly) newly released fonts and how you can get them. Twice a month.

Coffee Table Typography

A love for words, letters, language, and coffee. This digest of resources, articles and knowledge of typography — in design, on the web, or books — is curated by Ricardo Magalhães. More than just a set of links, Ricardo digs into the articles he’s linking to, whether it’s a new font release or a deeper look into the role of typography on the web. Once or twice a month.

User Experience UX Collective

Founded by Fabricio Teixeira and Caio Braga, this has become a bit of a UX behemoth, publishing a high volume of original articles and aggregated material on a range of UX and design related topics. Impressive output in terms of both quantity and quality. The newsletter reflects that. Weekly.


Designer and developer Ste Grainer founded this “love letter to user experience design, front-end development, building products, and making the world a better place.” His newsletter collects links to the latest UX articles and resources, as well as reminders of older, good resource links. Monthly.

User Interface Engineering

Jared Spool has a hard-won global reputation as a consultant, trainer, speaker and writer on usability, UI and UX. It’s not surprising, then, that he realizes the great value of older content. The UIE newsletter is as likely to link to an article from five years ago as yesterday, if it’s still relevant — and, with user-centric topics, that’s often the case. Smart and useful. Weekly.

UX Booth

By and for the user experience community, founded by Matthew Kammerer, David Leggett and Andrew Maier, for a readership mostly of beginning-to-intermediate user experience and interaction designers. The newsletter highlights one new inhouse article a week, plus links to three or four external UX articles and the occasional (often remote) job listing. Concise, focused, digestible and reliable. Weekly.


Founded by Pabini Gabriel-Petit in 2005, this provides insights and inspiration to experienced professionals working in every aspect of UX, as well as beginners. The newsletter focuses tightly on new content on the website, but when you publish up to six articles, interviews, panel discussions, and technical pieces a week, that makes for a good newsletter. Weekly.

Web Design An Event Apart Digest

In 1997, Jeffrey Zeldman and Brian M Platz started a mailing list called A List Apart. It evolved into one of the world’s leading magazine sites “for people who make websites” and spawned the An Event Apart conference series and book publisher A Book Apart. The Digest wraps all of this activity up and adds links to other interesting articles, videos and resources. Monthly.

Web Designer News

Built “to provide web designers and developers with a single location to discover the latest and most significant stories on the Web.” Content includes “quality news, fresh tools and apps, case studies, code demos, inspiration posts, videos and more”. It is extremely comprehensive, and posts links every couple of hours. There is a user voting system, and the most shared items are featured in the newsletter. Daily.


Webdesigner Depot

In 10 years, WDD has grown from a blog into a genuine online magazine for web designers — and developers, content authors, specialists and generalists. It draws on a stable of in-house writers and external contributors, including some high-profile industry types. The newsletter highlights 10 or so items ranging across news, product info, design, code and WDD’s monthly round-up. Weekly.


Sidebar

Since 2012, Sidebar has been collecting links about UI design, typography, CSS, user research, and all other facets of design. It's now a trusted resource for thousands of designers across the world to stay on top of the latest news, trends, and resources. Each newsletter can include from three to a dozen links. Daily.


The website was launched in 2007 as an inspirational hub for web designers. As web design trends became more advanced and more tools became available it evolved into a web design magazine. Articles are usually published daily, which makes for a satisfyingly full email newsletter. Weekly.

Responsive Design Weekly

Web developer Justin Avery’s side project ResponsiveDesign.is has become a go-to site dedicated to providing beginners and advanced users with tips, tricks, inspiration and resources for responsive design projects. The newsletter is equally popular and a very convenient way to keep up with responsive design news. Weekly.

The Web Designer Newsletter

A heady mix of image-heavy product placement, sponsored items, free resources and links to articles, this newsletter regularly comes up with links to articles I don’t see elsewhere. I don’t know who is behind it (the very brief website just says “curated by designers for designers”), but it’s hard to ignore. Weekly.

Design Systems

A curated publication full of interesting, relevant links. I would call this an example of an educated, focused and informed newsletter of links to web resources that genuinely advance thinking on design systems. Providing just an article title and link makes it feel robotic and impersonal, but maybe that’s an approach you’ll like. Weekly (more or less).

Web Field Manual

A curated list of resources focused on documenting knowledge for designing experiences and interfaces on the web. It is an ever-expanding collection of knowledge and inspiration for web designers, by web designers — namely Jon Yablonski, Garrett Wieronski, and Geoff Tice. The Dispatch keeps you up to date. Weekly.

Web Design Update

This is an oldie but a goodie, a plain text email digest disseminating news and information about web design and development with an emphasis on user experience, accessibility, and web standards. Around since 2002 (with a name change), it’s linked to the university-based Web Design Reference archive, and it is still edited by Laura Carlson. Weekly.

Web & Tech Versioning

SitePoint is a publisher of books, articles, videos and community forums on all aspects of web design and development. It produces several weekly newsletters, but this is the cream of the crop, curated by Adam Roberts. Short, sharp, timely, useful and often linking to items of industry interest that others miss. Daily.

Hacking UI

David Tintner and Sagi Shrieber’s favorite articles about design, front-end development, technology, startups, productivity and the occasional inspirational life lesson. They pack in a lot of links, which might be why they list only the article title linked to the source. With such a broad remit, there is a lot of overlap with other newsletters, but it’s all in one package. Weekly.

Hacker Newsletter

This side project of MailChimp’s Kale Davis links to stories on tech startups, programming developments and other items featured on Hacker News. That means this is another list of links, with little guidance on where you should direct your attention. Still, those links are pretty good. Weekly.

Pony Foo Weekly

A newsletter about the open web, highlighting the most important news about the web every Thursday. Founded by developer, author and speaker Nicolás Bevacqua, and now with a team of contributing authors, it also has lots of links to current articles about the web and tech. Weekly.

Offscreen Dispatch

Offscreen is a print magazine that focuses on people working on the web and with technology in general, edited and published by Kai Brach, a former web designer. Dispatch is Kai’s email newsletter offshoot in which he highlights products, news, events and insights for the discerning web worker. Classy and well selected. Weekly.

The History of the Web

Jay Hoffmann had the bright idea of collecting the moments and stories that make up web history to create an online timeline. Out of that came an email newsletter pointing to one or more of these stories. Highly readable, very informative, and often surprising. Weekly.

WordPress MasterWP Weekly

Alex and Ben produce this newsletter for WordPress professionals, with editorial and links to apps, tools and resources. It’s not all about WordPress, though — it’s what WordPress pros need to know, so there are general small-business, lifestyle and industry articles as well as WP-specific items, including technical pieces. Weekly.


Curated by Cristian Antohe and Bianca Petroiu, this one carries more links to WordPress theme and plugin releases and reviews, tutorials, podcasts and videos — as well as more business, freelance and industry news and articles of interest to WordPress professionals and enthusiastic amateurs. Weekly.


While I was compiling this article, I realized I have my own go-to set of people whose email newsletters I really look forward to. It’s not because they’re necessarily “the best” — it’s more that their choices of content, tone and focus strike a strong chord with me.

I’m sure you have your own, but here — apart from those who come up in the preceding sections — are the individuals whose newsletters I look forward to landing in my inbox.

And that completes our latest round-up of email newsletters for web designers and developers.

Did I say “completes”? Of course, this list is not complete! For one thing, it probably doesn’t have your favorites. Add them — preferably with a working link — in the comments below.

(vf, ra, il)
Categories: Web Design

New URL inspection tool & more in Search Console

Google Webmaster Central Blog - Mon, 06/25/2018 - 04:39

A few months ago, we introduced the new Search Console. Here are some updates on how it's progressing.

Welcome "URL inspection" tool

One of our most common user requests in Search Console is for more details on how Google Search sees a specific URL. We listened, and today we've started launching a new tool, “URL inspection,” to provide these details so Search becomes more transparent. The URL Inspection tool provides detailed crawl, index, and serving information about your pages, directly from the Google index.

Enter a URL that you own to learn the last crawl date and status, any crawling or indexing errors, and the canonical URL for that page. If the page was successfully indexed, you can see information and status about any enhancements we found on the page, such as a linked AMP version or rich results like Recipes and Jobs.

URL is indexed with valid AMP enhancement

If a page isn't indexed, you can learn why. The new report includes information about noindex robots meta tags and Google's canonical URL for the page.

URL is not indexed due to ‘noindex’ meta tag in the HTML
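Both of these signals live in the page's own HTML, so you can check for them yourself before the crawler does. The sketch below is purely illustrative (it is in no way how Google evaluates pages): a small standard-library Python parser that looks for a `noindex` robots meta tag, a `rel="canonical"` link, and a linked AMP version.

```python
# Illustrative only: a tiny stdlib parser for three on-page indexing
# signals that the URL Inspection tool reports on. This is NOT how
# Google evaluates pages; it just shows where the signals live.
from html.parser import HTMLParser


class IndexSignalParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.noindex = False   # <meta name="robots" content="...noindex...">
        self.canonical = None  # <link rel="canonical" href="...">
        self.amp_url = None    # <link rel="amphtml" href="...">

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            if "noindex" in a.get("content", "").lower():
                self.noindex = True
        elif tag == "link":
            rel = a.get("rel", "").lower()
            if rel == "canonical":
                self.canonical = a.get("href")
            elif rel == "amphtml":
                self.amp_url = a.get("href")


def inspect_html(html):
    """Return the three on-page indexing signals found in `html`."""
    parser = IndexSignalParser()
    parser.feed(html)
    return {"noindex": parser.noindex,
            "canonical": parser.canonical,
            "amp": parser.amp_url}


page = """<html><head>
<meta name="robots" content="noindex, nofollow">
<link rel="canonical" href="https://example.com/page">
<link rel="amphtml" href="https://example.com/page/amp">
</head><body></body></html>"""

print(inspect_html(page))
```

A page flagged `noindex` by a check like this is the kind of page the tool reports as not indexed, with the meta tag named as the reason.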

A single click can take you to the issue report showing all other pages affected by the same issue to help you track down and fix common bugs.

We hope that the URL Inspection tool will help you debug issues with new or existing pages in the Google Index. We began rolling it out today; it will become available to all users in the coming weeks.

More exciting updates

In addition to the launch of URL inspection, we've recently launched a few more features and reports to the new Search Console.

Thank you for your feedback

We are constantly reading your feedback, conducting surveys, and monitoring usage statistics of the new Search Console. We are happy to see so many of you using the new issue validation flow in Index Coverage and the AMP report. We notice that issues tend to get fixed more quickly when you use these tools. We also see that you appreciate the updates on the validation process that we provide by email or on the validation details page.

We want to thank everyone who provided feedback: it has helped us improve our flows and fix bugs on our side.

More to come

The new Search Console is still beta, but it's adding features and reports every month. Please keep sharing your feedback through the various channels and let us know how we're doing.

Posted by Roman Kecher and Sion Schori - Search Console engineers
Categories: Web Design