Working Together: How Designers And Developers Can Communicate To Create Better Projects

Smashing Magazine - Tue, 04/24/2018 - 07:50
By Rachel Andrew

Among the most popular suggestions on Smashing Magazine’s Content User Suggestions board is the need to learn more about the interaction and communication between designers and developers. There are probably several articles’ worth of very specific things that could be covered here, but I thought I would kick things off with a general post rounding up some experiences on the subject.

Given the wide range of skills held by the line-up at our upcoming SmashingConf Toronto (a fully live, no-slides-allowed event), I decided to solicit some feedback. I’ve wrapped those thoughts up with my own experience of 20 years working alongside designers and other developers. I hope you will add your own experiences in the comments.

Some tips work best when you can be in the same room as your team, and others are helpful for the remote worker or freelancer. What shines through all of the advice, however, is the need to respect each other, and the fact that everyone is working to try and create the best outcome for the project.

For many years, my own web development company operated as an outsourced web development provider for design agencies. This involved doing everything from front-end development to implementing e-commerce and custom content management solutions. Our direct client was the designer or design agency who had brought us on board to help with the development aspect of the work; in an ideal situation, however, we would be part of a team working to deliver a great result to the end client.

Sometimes this relationship worked well. We would feel like a valued part of the team, our ideas and experience would count, and we would work with the designers to come up with the best solution within budgetary, time, and other constraints.

In many cases, however, no attempt was made to form a team. The design agency would throw a picture of a website as a PDF file over the fence to us, then move on to work on their next project. There was little room for collaboration, and often the designer who had created the files was busy on some other work when we came back with questions.

It was an unsatisfactory way to work for everyone. We would be frustrated because we did not have a chance to help ensure that what was designed could be built in a performant and accessible way, within the time and budget agreed on. The designer of the project would be frustrated: Why were these developers asking so many questions? Can they not just build the website as I have designed it? Why are the fonts not the size I wanted?

The Waterfall versus Agile argument might be raised here. The situation where a PDF is thrown over the fence is often cited as an example of how bad a Waterfall approach is. Still, working in a fully Agile way is often not possible for teams made of freelancers or separate parties doing different parts of the work. Therefore, in reading these suggestions, look at them through the lens of the projects you work on. However, try not to completely discount something as unworkable because you can’t use the full process. There are often things we can take without needing to fully adopt one methodology or another.

Setting Up A Project For Success

I came to realize that very often the success or failure of the collaboration started before we even won the project, with the way in which we proposed the working relationship. We had to explain upfront that experience had taught us that the approach of being handed a PDF, quoting, and returning a finished website did not give the best results.

Projects that were successful had a far more iterative approach. It might not be possible to have us work alongside the designers or in a more Agile way. However, having a number of rounds of design and development with time for feedback from each side went a long way to prevent the frustrations of a method where work was completed by each side independently.

Creating Working Relationships

Having longer-term relationships with an agency, spanning a number of projects, worked well. We got to know the designers, learned how they worked, could anticipate their questions, and could ensure that we answered them upfront. We were able to share development knowledge: the things that made a design easier or harder to implement, which would, therefore, have an impact on time and budget. They were able to communicate better with us in order to explain why a certain design element was vital, even if it was going to add complexity.

For many freelance designers and developers, and also for those people who work for a distributed company, communication can become mostly text-based. This can make it particularly hard to build relationships. There might be a lot of communication — by email, in Slack, or through messages on a project management platform such as Basecamp. However, all of these methods leave us without the visual cues we might pick up from in-person meetings. An email that we see as to the point may come across to the reader as angry. The quick-fire nature of tools such as Slack might leave us putting in writing something we would not say to that person while looking them in the eye!

Freelance data scientist Nadieh Bremer will talk to us about visualizing data in Toronto. She has learned that meeting people face to face — or at least having a video call — is important. She told me:

“As a remote freelancer, I know that to interact well with my clients I really need to have a video call (stress on the video): I need to see their face and facial/body interactions and they need to see mine. For clients that I have within public transport distance, I used to travel there for a first ‘getting to know each other/see if we can do a project’ meeting, which would take loads of time. But I noticed for my clients abroad (that I can’t visit anyway) that a first client call (again, make sure it’s a video-call) works more than good enough.

It’s the perfect way to weed out the clients that need other skills than I can give, those that are looking for a cheap deal, and those where I just felt something wasn’t quite clicking or I’m not enthusiastic about the project after they’ve given me a better explanation. So these days I also ask my clients in the Netherlands, where I live, who might want to do a first meeting to have it online (and once we get on to an actual contract I can come by if it’s beneficial).”

Working In The Open

Working in the open (with the project frequently deployed to a staging server that everyone had access to see), helped to support an iterative approach to development. I found that it was important to support that live version with explanations and notes of what to look at and test and what was still half finished. If I just invited people to look at it without that information we would get lists of fixes to make to unfinished features, which is a waste of time for the person doing the reporting. However, a live staging version, plus notes in a collaboration tool such as Basecamp meant that we could deploy sections and post asking for feedback on specific things. This helped to keep everyone up to date and part of the project even if — as was often the case for designers in an agency — they had a number of other projects to work on.

There are collaboration tools to help designers to share their work too. Asking for recommendations on Twitter gave me suggestions for Zeplin, Invision, Figma, and Adobe XD. Showing work in progress to a developer can help them to catch things that might be tricky before they are signed off by the client. By sharing the goal behind a particular design feature within the team, a way forward can be devised that meets the goal without blowing the budget.

Zeplin is a collaboration tool for developers and designers

Scope Creep And Change Requests

The thing about working in the open is that people then start to have ideas (which should be a positive thing), however, most timescales and budgets are not infinite! This means you need to learn to deal with scope creep and change requests in a way that maintains a good working relationship.

We would often get requests for things that were trivial to implement, accompanied by a message saying how sorry they were about this huge change, and requests for incredibly time-consuming things with an assumption that they would be quick. Someone who is not a specialist has no idea how long anything will take. Why should they? It is important to remember this rather than getting frustrated about the big changes that are being asked for. Have a conversation about the change, explain why it is more complex than it might appear, and try to work out whether this is a vital addition or change, or just a nice idea that someone has had.

If the change is not essential, then it may be enough to log it somewhere as a phase two request, demonstrating that it has been heard and won’t be forgotten. If the big change is still being requested, we would outline the time it would take and give options. This might mean dropping some other feature if a project has a fixed budget and tight deadline. If there was flexibility then we could outline the implications on both costs and end date.

With regard to costs and timescales, we learned early on to pad our project quotes in order that we could absorb some small changes without needing to increase costs or delay completion. This helped with the relationship between the agency and ourselves as they didn’t feel as if they were being constantly nickel and dimed. Small changes were expected as part of the process of development. I also never wrote these up in a quote as contingency, as a client would read that and think they should be able to get the project done without dipping into the contingency. I just added the time to the quote for the overall project. If the project ran smoothly and we didn’t need that time and money, then the client got a smaller bill. No one is ever unhappy about being invoiced for less than they expected!

This approach can work even for people working in-house. Adding some time to your estimates means that you can absorb small changes without needing to extend the timescales. It helps working relationships if you are someone who is able to say yes as often as possible.

This does require that you become adept at estimating timescales. This is a skill you can develop by logging your time to achieve your work, even if you don’t need to log your time for work purposes. While many of the things you design or develop will be unique, and seem impossible to estimate, by consistently logging your time you will generally find that your ballpark estimates become more accurate as you make yourself aware of how long things really take.

Respect

Aaron Draplin will be bringing tales from his career in design to Toronto, and responded with the thought that it comes down to respect for your colleague’s craft:

“It all comes down to respect for your colleague’s craft, and sort of knowing your place and precisely where you fit into the project. When working with a developer, I surrender to them in a creative way, and then, defuse whatever power play they might try to make on me by leading the charges with constructive design advice, lightning-fast email replies and generally keeping the spirit upbeat. It’s an odd offense to play. I’m not down with the adversarial stuff. I’m quick to remind them we are all in the same boat, and, who’s paying their paycheck. And that’s not me. It’s the client. I’ll forever be on their team, you know? We make the stuff for the client. Not just me. Not ‘my team’. We do it together. This simple methodology has always gone a long way for me.”

I love this; it underpins everything that this article discusses. Think back to any working relationship that has gone bad: how many of those situations involved you feeling as if the other person just didn’t understand your point of view or the things you believe are important? Most reasonable people understand that compromise has to be made; it is when it appears that your point of view is not being considered that frustration sets in.

There are sometimes situations where a decision is being made, and your experience tells you it is going to result in a bad outcome for the project, yet you are overruled. On a few occasions, decisions were made that I believed were so poor that I asked for the decision and our objection to it to be put in writing, in order that we could not be held accountable for any bad outcome in the future. This is not something you should feel the need to do often; however, it is quite powerful and sometimes results in the decision being reversed. An example would be a client who keeps insisting on doing something that would cause an accessibility problem for a section of their potential audience. If explaining the issue does not help, and the client insists on continuing, ask for that decision in writing in order to document your professional advice.

Learning The Language

I recently had the chance to bring my CSS Layout Workshop not to my usual groups of front-end developers but instead to a group of UX designers. Many of the attendees were there not to improve their front-end development skills, but more to understand enough of how modern CSS Layout worked that they could have better conversations with the developers who built their designs. Many of them had also spent years being told that certain things were not possible on the web, but were realizing that the possibilities in CSS were changing through things like CSS Grid. They were learning some CSS not necessarily to become proficient in shipping it to production, but so they could share a common language with developers.

There are often debates on whether “designers should learn to code.” In reality, I think we all need to learn something of the language, skills, and priorities of the other people on our teams. As Aaron reminded us, we are all on the same team, we are making stuff together. Designers should learn something about code just as developers should also learn something of design. This gives us more of a shared language and understanding.

Seb Lee-Delisle, who will speak on the subject of Hack to the Future in Toronto, agrees:

“I have basically made a career out of being both technical and creative so I strongly feel that the more crossover the better. Obviously what I do now is wonderfully free of the constraints of client work but even so, I do think that if you can blur those edges, it’s gonna be good for you. It’s why I speak at design conferences and encourage designers to play with creative coding, and I speak at tech conferences to persuade coders to improve their visual acuity. Also with creative coding. :) It’s good because not only do I get to work across both disciplines, but also I get to annoy both designers and coders in equal measure.”

I have found that introducing designers to browser DevTools (in particular the layout tools in Firefox) and to various code generators on the web has been helpful. Being able to test ideas out without writing code helps a designer who isn’t confident in writing code to have better conversations with their developer colleagues. Playing with tools such as gradient generators, clip-path or animation tools can also help designers see what is possible on the web today.

Animista has demos of different styles of animation

We are also seeing a number of tools that can help people create websites in a more visual way. Developers can sometimes turn their noses up about the code output of such tools, and it’s true they probably won’t be the best choice for the production code of a large project. However, they can be an excellent way for everyone to prototype ideas, without needing to write code. Those prototypes can then be turned into robust, permanent and scalable versions for production.

An important tip for developers is to refrain from commenting on the code quality of prototypes from members of the team who do not ship production code! Stick to what the prototype is showing as opposed to how it has been built.

A Practical Suggestion To Make Things Visual

Eva-Lotta Lamm will be speaking in Toronto about Sketching and, perhaps unsurprisingly, passed on practical tips for supporting a conversation by visualizing the problem.

Creating a shared picture of a problem or a solution is a simple but powerful tool to create understanding and make sure that everybody is talking about the same thing.

Visualizing a problem can range from quick sketches on a whiteboard to more complex diagrams, like customer journey diagrams or service blueprints.

But even just spatially distributing words on a surface adds a valuable layer of meaning. Something as simple as arranging post-its on a whiteboard in different ways can help us to see relationships, notice patterns, find gaps and spot outliers or anomalies. If we add simple structural elements (like arrows, connectors, frames, and dividers) and some sketches into the mix, the relationships become even more obvious.

Visualising a problem creates context and builds a structural frame that future information, questions, and ideas can be added to in a ‘systematic’ way.

Visuals are great to support a conversation, especially when the conversation is ‘messy’ and several people are involved.

When we visualize a conversation, we create an external memory of the content that is visible to everybody and can easily be referred back to. We don’t have to hold everything in our minds. This frees up space in everybody’s mind to think and talk about other things without the fear of forgetting something important. Visuals also give us something concrete to hold on to and to follow along while listening to complex or abstract information.

When we have a visual map, we can point to particular pieces of content — a simple but powerful way to make sure everybody is talking about the same thing. And when referring back to something discussed earlier, the map automatically reminds us of the context and the connections to surrounding topics.

When we sketch out a problem, a solution or an idea, the way we see it (literally) changes. Every time we express a thought in a different medium, we are forced to shape it in a specific way, which allows us to observe and analyze it from different angles.

Visualising forces us to make decisions about a problem that words alone don’t. We have to decide where to place each element, decide on its shape, size, its boldness, and color. We have to decide what we sketch and what we write. All these decisions require a deeper understanding of the problem and make important questions surface fairly quickly.

All in all, supporting your collaboration by making it more visual works like a catalyst for faster and better understanding.

Working in this way is obviously easier if your team is working in the same room. For distributed teams and freelancers, there are still ways to communicate other than with words: a quick screencast to demonstrate an issue, or even a sketched and photographed diagram, can be incredibly helpful. There are also collaborative tools such as Milanote, Mural, and Niice; such tools can help with the process Eva-Lotta described even if people can’t be in the same room.

Niice helps you to collect and discuss ideas

I’m very non-visual and have had to learn how useful these other methods of communication are to the people I work with. I have been guilty on many occasions of forgetting that just because I don’t personally find something useful, it is still helpful to other people. It is certainly a good idea to change how you are trying to communicate an idea if it becomes obvious that you are talking at cross-purposes.

Over To You

As with most things, there are many ways to work together. Even for remote teams, there is a range of tools which can help break down barriers to collaborating in a more visual way. However, no tool is able to fix problems caused by a lack of respect for the work of the rest of the team. A good relationship starts with the ability for all of us to take a step back from our strongly held opinions, listen to our colleagues, and learn to compromise. We can then choose tools and workflows which help to support that understanding that we are all on the same team, all trying to do a great job, and all have important viewpoints and experience to bring to the project.

I would love to hear your own experiences working together in the same room or remotely. What has worked well — or not worked at all! Tools, techniques, and lessons learned are all welcome in the comments. If you would be keen to see tutorials about specific tools or workflows mentioned here, perhaps add a suggestion to our User Suggestions board, too.

On Failures And Successes: Meet SmashingConf Freiburg 2018

Smashing Magazine - Tue, 04/24/2018 - 03:00
By Vitaly Friedman

Everybody loves speaking about successes, but nobody can succeed without failing big time along the way. It’s through mistakes that we grow and get smarter. So for the upcoming SmashingConf Freiburg 2018 (Sept. 10–11), we want to put these stories into focus for a change and explore practical techniques and strategies learned in real projects — the hard way. Aarron Walter, Josh Clark, Tammy Everts, Morten Rand-Hendriksen and many others will be speaking, and Early-Bird tickets are available now.

One track, two days, honest talks, live sessions, and a handful of practical workshops. That’s SmashingConf Freiburg 2018! Excited yet?

The night before the conference we’ll be hosting a FailNight — a warm-up party with a twist. Every session will be highlighting how we all failed on a small or big scale, and what we all can learn from it. With talks from the community, for the community. Sounds like fun? Well, it will be!

Speakers

As usual, one track, two conference days (Sept. 10–11), 12 speakers, and just 260 available seats. The conference will cover everything from efficient design workflow to design systems and copywriting, multi-cultural designs, designing for mobile and other fields that may come up in your day-to-day work.

The first confirmed speakers include Aarron Walter and Tammy Everts.
Conference tickets cost €499 and cover two days of great speakers and networking. Combined conference and workshop tickets save €100 and offer three days full of learning and networking.

Workshops At SmashingConf Freiburg

Our workshops give you the opportunity to spend a full day on the topic of your choice. Tickets for the full-day workshops cost €399. If you buy a workshop ticket in combination with a conference ticket, you’ll save €100 on the regular workshop ticket price. Seats are limited.

Workshops on Wednesday, September 12th

Josh Clark on Design For What’s Next
Spend a day exploring the web’s emerging interactions and how you can put them to work today. Your guide is designer Josh Clark, author of Designing for Touch and ambassador of the near future. As you move into newer design tools — speech, bots, physical interfaces, artificial intelligence, and more — you’ll learn the tools and techniques for prototyping and launching these new interfaces and get answers to foundational questions for all your projects. Read more…

Seb Lee-Delisle on JavaScript Graphics And Animation
In this workshop, Seb will demonstrate a variety of beautiful visual effects using JavaScript and HTML5 canvas. You will learn animation and graphics techniques that you can use to add a sense of dynamism to your projects. Seb demystifies programming and explores its artistic possibilities. His presentations and workshops enable artists to overcome their fear of code and encourage programmers of all backgrounds to be more creative and imaginative. Read more…

Vitaly Friedman on Dirty Little Tricks From The Dark Corners Of eCommerce
In this workshop, Vitaly will use real-life examples as a case study and examine refinements of the interface on the spot. You’ll set up a very clear roadmap on how you can do the right things in the right order to improve conversion and customer experience. That means removing distractions, minimizing friction and avoiding disruptions and dead ends caused by the interface. Read more…

Location

As always, the Historical Merchants’ Hall located right in the heart of our hometown Freiburg will be the home of SmashingConf Freiburg. First mentioned in 1378 and having retained its present-day form since 1520, the “Kaufhaus” is a symbol of the importance of trade in medieval Freiburg, and, well, its beautiful architecture still blows our audience away each year anew.

The “Kaufhaus” (Historical Merchants’ Hall) will be our Freiburg venue also this time around. (Image credit: John Davey)

Why This Conference Could Be For You

Each SmashingConf is a friendly and intimate experience. A cozy get-together of likeminded people who share their stories, their ideas, their hard-learned lessons. At SmashingConf Freiburg you will learn how to:

  1. Use production-ready CSS Grid layouts,
  2. Run performance audits,
  3. Recognize, revise, and resolve dark patterns and misleading copy in your own products,
  4. Design and build a product with a global audience in mind,
  5. Extract action-oriented insights from real user data,
  6. Create better e-commerce experiences,
  7. Create responsible machine-learning applications,
  8. Get leading design right,
  9. … and a lot more.
Download “Convince Your Boss” PDF

You need to convince your boss to send you to Freiburg? No worries, we’ve prepared a neat Convince Your Boss PDF that you can use to tip the scales in your favor. Fingers crossed.

Diversity And Inclusivity

SmashingConfs are a safe, friendly place. We care about diversity and inclusivity at our events and don’t tolerate any disrespect. We also provide student and diversity tickets.

See You In Freiburg!

We’d love you to join us for two memorable days, lots of learning, sharing, and inspiring conversations with friendly people, of course. See you there!

Notifications in Laravel

Tuts+ Code - Web Development - Mon, 04/23/2018 - 05:00

In this article, we're going to explore the notification system in the Laravel web framework. The notification system in Laravel allows you to send notifications to users over different channels. Today, we'll discuss how you can send notifications over the mail channel.

Basics of Notifications

During application development, you often need to notify users about different state changes. It could be either sending email notifications when the order status is changed or sending an SMS about their login activity for security purposes. In particular, we're talking about messages that are short and just provide insight into the state changes.

Laravel already provides a built-in feature that helps us achieve something similar—notifications. In fact, it makes sending notification messages to users a breeze and a fun experience!

The beauty of that approach is that it allows you to choose from the different channels that notifications will be sent over. Let's quickly go through the different notification channels supported by Laravel.

  • Mail: The notifications will be sent in the form of email to users.
  • SMS: As the name suggests, users will receive SMS notifications on their phone.
  • Slack: In this case, the notifications will be sent on Slack channels.
  • Database: This option allows you to store notifications in a database should you wish to build a custom UI to display them.

Among different notification channels, we'll use the mail channel in our example use-case that we're going to develop over the course of this tutorial.

In fact, it'll be a pretty simple use-case that allows users of our application to send messages to each other. When users receive a new message in their inbox, we'll notify them about this event by sending an email to them. Of course, we'll do that by using the notification feature of Laravel!

Create a Custom Notification Class

As we discussed earlier, we are going to set up an application that allows users of our application to send messages to each other. On the other hand, we'll notify users when they receive a new message from other users via email.

In this section, we'll create necessary files that are required in order to implement the use-case that we're looking for.

To start with, let's create the Message model that holds messages sent by users to each other.

$ php artisan make:model Message --migration

We also need to add a few fields like to, from and message to the messages table. So let's change the migration file before running the migrate command.

<?php

use Illuminate\Support\Facades\Schema;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;

class CreateMessagesTable extends Migration
{
    /**
     * Run the migrations.
     *
     * @return void
     */
    public function up()
    {
        Schema::create('messages', function (Blueprint $table) {
            $table->increments('id');
            $table->integer('from', FALSE, TRUE);
            $table->integer('to', FALSE, TRUE);
            $table->text('message');
            $table->timestamps();
        });
    }

    /**
     * Reverse the migrations.
     *
     * @return void
     */
    public function down()
    {
        Schema::dropIfExists('messages');
    }
}

Now, let's run the migrate command that creates the messages table in the database.

$ php artisan migrate

That should create the messages table in the database.

Also, make sure that you have enabled the default Laravel authentication system in the first place so that features like registration and login work out of the box. If you're not sure how to do that, the Laravel documentation provides a quick insight into that.
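
As a quick pointer (this command reflects the Laravel 5.x tooling current at the time of writing and is an assumption on my part, not a step spelled out in this article), the default authentication scaffolding is generated with a single artisan command:

$ php artisan make:auth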

Since each notification in Laravel is represented by a separate class, we need to create a custom notification class that will be used to notify users. Let's use the following artisan command to create a custom notification class—NewMessage.

$ php artisan make:notification NewMessage

That should create the app/Notifications/NewMessage.php class, so let's replace the contents of that file with the following contents.

<?php
// app/Notifications/NewMessage.php

namespace App\Notifications;

use Illuminate\Bus\Queueable;
use Illuminate\Notifications\Notification;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Notifications\Messages\MailMessage;
use App\User;

class NewMessage extends Notification
{
    use Queueable;

    public $fromUser;

    /**
     * Create a new notification instance.
     *
     * @return void
     */
    public function __construct(User $user)
    {
        $this->fromUser = $user;
    }

    /**
     * Get the notification's delivery channels.
     *
     * @param  mixed  $notifiable
     * @return array
     */
    public function via($notifiable)
    {
        return ['mail'];
    }

    /**
     * Get the mail representation of the notification.
     *
     * @param  mixed  $notifiable
     * @return \Illuminate\Notifications\Messages\MailMessage
     */
    public function toMail($notifiable)
    {
        $subject = sprintf('%s: You\'ve got a new message from %s!', config('app.name'), $this->fromUser->name);
        $greeting = sprintf('Hello %s!', $notifiable->name);

        return (new MailMessage)
            ->subject($subject)
            ->greeting($greeting)
            ->salutation('Yours Faithfully')
            ->line('The introduction to the notification.')
            ->action('Notification Action', url('/'))
            ->line('Thank you for using our application!');
    }

    /**
     * Get the array representation of the notification.
     *
     * @param  mixed  $notifiable
     * @return array
     */
    public function toArray($notifiable)
    {
        return [
            //
        ];
    }
}

As we're going to use the mail channel to send notifications to users, the via method is configured accordingly. So this is the method that allows you to configure the channel type of a notification.

Next, there's the toMail method that allows you to configure various email parameters. In fact, the toMail method should return the instance of \Illuminate\Notifications\Messages\MailMessage, and that class provides useful methods that allow you to configure email parameters.

Among various methods, the line method allows you to add a single line in a message. On the other hand, there's the action method that allows you to add a call-to-action button in a message.

In this way, you could format a message that will be sent to users. So that's how you're supposed to configure the notification class while you're using the mail channel to send notifications.

At the end, you need to make sure that you implement the necessary methods according to the channel type configured in the via method. For example, if you're using the database channel that stores notifications in a database, you don't need to configure the toMail method; instead, you should implement the toArray method, which formats the data that needs to be stored in a database.
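
For comparison, here is a minimal sketch of what a database-channel variant of the same notification might look like. This is not part of the article's code; the payload keys (from_id, from_name, body) are arbitrary illustrative choices, and the database channel also expects the notifications table created by php artisan notifications:table.

// Hypothetical database-channel variant of NewMessage (sketch only).
public function via($notifiable)
{
    return ['database'];
}

public function toArray($notifiable)
{
    // This array is serialized to JSON and stored in the notifications table.
    return [
        'from_id'   => $this->fromUser->id,
        'from_name' => $this->fromUser->name,
        'body'      => 'You have received a new message.',
    ];
}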

How to Send Notifications

In the previous section, we created a notification class that's ready to send notifications. In this section, we'll create files that demonstrate how you could actually send notifications using the NewMessage notification class.

Let's create a controller file at app/Http/Controllers/NotificationController.php with the following contents.

<?php

namespace App\Http\Controllers;

use App\Http\Controllers\Controller;
use App\Message;
use App\User;
use App\Notifications\NewMessage;
use Illuminate\Support\Facades\Notification;

class NotificationController extends Controller
{
    public function __construct()
    {
        $this->middleware('auth');
    }

    public function index()
    {
        // user 2 sends a message to user 1
        $message = new Message;
        $message->setAttribute('from', 2);
        $message->setAttribute('to', 1);
        $message->setAttribute('message', 'Demo message from user 2 to user 1.');
        $message->save();

        $fromUser = User::find(2);
        $toUser = User::find(1);

        // send notification using the "user" model, when the user receives new message
        $toUser->notify(new NewMessage($fromUser));

        // send notification using the "Notification" facade
        Notification::send($toUser, new NewMessage($fromUser));
    }
}

Of course, you need to add an associated route in the routes/web.php file.

Route::get('notify/index', 'NotificationController@index');

There are two ways Laravel allows you to send notifications: by using either the notifiable entity or the Notification facade.

If the entity model class utilizes the Illuminate\Notifications\Notifiable trait, then you could call the notify method on that model. The App\User class implements the Notifiable trait and thus it becomes the notifiable entity. On the other hand, you could also use the Illuminate\Support\Facades\Notification Facade to send notifications to users.
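
For reference, the stock App\User model that ships with Laravel already pulls the trait in, roughly like this (an abridged sketch of the default file, not code you need to write for this tutorial):

<?php
// app/User.php (abridged)

namespace App;

use Illuminate\Notifications\Notifiable;
use Illuminate\Foundation\Auth\User as Authenticatable;

class User extends Authenticatable
{
    // Notifiable is what provides the notify() method used below.
    use Notifiable;

    protected $fillable = ['name', 'email', 'password'];
}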

Let's go through the index method of the controller.

In our case, we're going to notify users when they receive a new message, so the index method starts by mimicking that behavior: it creates and saves a message sent from user 2 to user 1.

Next, we've notified the recipient user about a new message using the notify method on the $toUser object, as it's the notifiable entity.

$toUser->notify(new NewMessage($fromUser));

You may have noticed that we also pass the $fromUser object as the first argument of the __construct method, since we want to include the sender's name in the message.

On the other hand, if you want to mimic it using the Notification facade, it's pretty easy to do so using the following snippet.

Notification::send($toUser, new NewMessage($fromUser));

As you can see, we've used the send method of the Notification facade to send a notification to a user.
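
One practical reason to reach for the facade (not something this article relies on, but documented Laravel behavior) is that send also accepts a collection of notifiable models, so a single call can notify several users at once:

// Hypothetical example: notify every user except the sender in one call.
$recipients = User::where('id', '!=', $fromUser->id)->get();
Notification::send($recipients, new NewMessage($fromUser));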

Go ahead and open the URL http://your-laravel-site-domain/notify/index in your browser. If you're not logged in yet, you'll be redirected to the login screen. Once you're logged in, you should receive a notification email at the email address that's associated with user 1.

You may be wondering how the notification system detects the to address when we haven't configured it anywhere yet. In that case, the notification system tries to find the email property in the notifiable object. And the App\User class already has that property, as we're using the default Laravel authentication system.

However, if you would like to override this behavior and use a property other than email, you just need to define the following method in your notification class.

public function routeNotificationForMail()
{
    return $this->email_address;
}

Now, the notification system should look for the email_address property instead of the email property to fetch the to address.
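
Keep in mind that this assumes the notifiable model actually has an email_address attribute to return. If it does not, you would have to add the column yourself; a hypothetical migration for that might look like this:

// Illustrative only: add the column read by routeNotificationForMail().
// This snippet belongs inside a migration's up() method.
Schema::table('users', function (Blueprint $table) {
    $table->string('email_address')->nullable();
});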

And that's how to use the notification system in Laravel. That brings us to the end of this article as well!

Conclusion

What we've gone through today is one of the useful, yet least discussed, features in Laravel—notifications. It allows you to send notifications to users over different channels.

After a quick introduction, we implemented a real-world example that demonstrated how to send notifications over the mail channel. In fact, it's really handy in the case of sending short messages about state changes in your application.

For those of you who are either just getting started with Laravel or looking to expand your knowledge, site, or application with extensions, we have a variety of things you can study in Envato Market.

Should you have any queries or suggestions, don't hesitate to post them using the feed below!

Redesigning A Digital Interior Design Shop (A Case Study)

Smashing Magazine - Mon, 04/23/2018 - 04:50
By Boyan Kostov

Good products are the result of a continual effort in research and design. And, as it usually turns out, our designs don’t solve the problems they were meant to right away. It’s always about constant improvement and iteration.

I have a client called Design Cafe (let’s call it DC). It’s an innovative interior design shop founded by a couple of very talented architects. They produce bespoke designs for the Indian market and sell them online.

DC approached me two years ago to design a few visual mockups for their website. My scope then was limited to visuals, but I didn’t have the proper foundation upon which to base those visuals, and since I didn’t have an ongoing collaboration with the development team, the final website design did not accurately capture the original design intent and did not meet all of the key user needs.

A year and a half passed and DC decided to come back to me. Their website wasn’t providing the anticipated stream of leads. They came back because my process was good, but they wanted to expand the scope to give it space to scale. This time, I was hired to do the research, planning, visual design and prototyping. This would be a makeover of the old design based on user input and data, and prototyping would allow for easy communication with the development team. I assembled a small team of two: me and a fellow designer, Miroslav Kirov, to help run proper research. In less than two weeks, we were ready to start.

Kick-Off

Useful tip: I always kick off a project by talking to the stakeholders. For smaller projects with one or two stakeholders, you can blend the kick-off and the interview into one. Just make sure it’s no longer than an hour.

Stakeholder Interviews

Our two stakeholders are both domain experts. They have a brick-and-mortar store in the center of Bangalore that attracts a lot of people. Once in there, people are delighted by the way the designs look and feel. Our clients wanted to have a website that conveys the same feeling online and that would make its visitors want to go to the store.

Their main pain points:

  • The website wasn’t responsive.

  • There wasn’t a clear distinction between new, returning and potential clients.

  • DC’s selling points weren’t clearly communicated.

They had future plans for transforming the website into a hub for interior design ideas. And, last but not least, DC wanted to attract fresh design talent.

Defining the Goals

We shortlisted all of our goals for the project. Our main goal was to explain in a clear and appealing manner what DC does for existing and potential clients in a way that engages them to contact DC and go to the store. Some secondary goals were:

  • lower the drop-off rate,

  • capture some customer data,

  • clarify the brand’s message,

  • make the website responsive,

  • explain budgets better,

  • provide decision-making assistance and become an information influencer.

Key Metrics

Our number-one key metric, which measures the main goal, was the rate at which website users convert into leads who visit the store. We needed to improve that by at least 5% initially — a realistic number we decided on with our stakeholders. In order to do that, we needed to:

  • shorten the conversion time (time needed for a user to get in touch with DC),

  • increase the form application rate,

  • increase the overall satisfaction users get from the website.

We would track these metrics by setting up Google Analytics Events once the website is online and by talking with leads who come into the store through the website.

Useful tip: Don’t focus on too many metrics. A handful of your most important ones are enough. Measuring too many things will dilute the results.

Discovery

In order for us to gain the best possible insights, our user interviews had to target both previous and potential clients, but we had to go minimal, so we picked two potential and three existing clients. They were mostly from the IT sector — DC’s main target group. Given our pretty tight schedule, we started with desk research while we waited for all five user interviews to be scheduled.

Useful tip: You need to know who you are designing for and what research has been done before. Stakeholders tell you their story, but you need to compare it to data and to users’ opinions, expectations and needs.

Data

We could reference some Google Analytics data from the website:

  • Most users went to the kitchen, then to the bedroom, then to the living room.

  • The high bounce rate of 80%+ was probably due to a misunderstanding of the brand message and unclear flows and calls to action (CTAs).

  • Traffic was mostly mobile.

  • Most users landed on the home page, 70% of them from ads and 16% directly (mostly returning customers), and the rest were equally divided between Facebook and Google Search.

  • 90% of social media traffic came from Facebook. Expanding brand awareness to Instagram and Twitter could be beneficial.

Competitors

There’s a lot of local competition in the sector. Here were some repeating patterns:

  • video spots and elaborate galleries showing the completed designs with clients discussing their services;

  • attractive design presentations with high-quality photos;

  • messaging targeted at the appropriate groups;

  • quizzes for picking styles;

  • big bold typography, less text and more visuals.

Users

DC’s customers are mostly aged between 28 and 40, with a secondary set in the higher bracket of 38 to 55 who come for their second home. They are IT or business professionals with a mid to high budget. They value good customer experience but are price-conscious and very practical. Because they are mostly families, very often the wife is the hidden dominant decision-maker.

We talked with five users (three existing and two potential customers) and sent out a survey to 20 more (mixing existing and potential customers; see Design Cafe Questionnaire).

User Interviews

Useful tip: Be sure to schedule all of your interviews ahead of time, and plan for more people than you need. Include extreme users along with the mainstreams. Chances are that if something works for an extreme user, it will work for the rest as well. Extremes will also give you insight about edge cases that mainstreams just don’t care about.

All users were confused about the main goal of the website. Some of their opinions:

  • “It lacks a proper flow.”

  • “I need more clarity in the process, especially in terms of timelines.”

  • “I need more educational information about interior design.”

Everyone was pretty well informed about the competition. They had tried other companies before DC. All found out about DC by either a reference, Google, ads or by physically passing by the store. And, boy, did they love the store! They treated it like an Apple Store for interior design. Turns out that DC really did a great job with that.

Useful tip: Negative feedback helps us find opportunities for improvement. But positive feedback is also pretty useful because it helps you identify which parts of the product are worth retaining and building upon.

Personal touch, customer service, prices and quality of materials were their main motivations for choosing DC. People insisted on being able to see the price of every element on a page at any time (the previous design didn’t have prices on the accessories).

We made an interesting but somehow expected discovery about device usage. Mobile devices were used mostly for consumption and browsing, but when it came to ordering, most people opened their laptops.

Surveys

The survey results mostly overlapped with the interviews:

  • Users found DC through different channels, but mainly through referrals.

  • They didn’t quite understand the current state of the website. Most of them had searched for or used other services before DC.

  • All of the surveyed users ordered kitchen designs. Almost all had difficulty choosing the right design style.

  • Most users found the process of designing their own interior hard and were interested in features that could make their choice easier.

Useful tip: Writing good survey questions takes time. Work with a researcher to write them, and schedule double the time you think you’ll need.

Planning

User Journeys Overview

Talking with customers helped us gain useful insight about which scenarios would be most important to them. We made an affinity diagram with everything we collected and started prioritizing and combining items in chunks.

Useful tip: Use a white board to download all of your team’s knowledge, and saturate the board with it. Group everything until you spot patterns. These patterns will help you establish themes and find out the most important pain points.

The result was seven point-of-view problem statements that we decided to design for:

  1. A new customer needs more information about DC because they need proof of credibility.
  2. A returning customer needs quick access to the designs because they don’t want to waste time.
  3. All customers need to be able to browse the designs at any time.
  4. All customers want to browse designs relevant to their tastes, because that will shorten their search time.
  5. Potential leads need a way to get in touch with DC in order to purchase a design.
  6. All customers, once they’ve ordered, need to stay up to date with their order status, because they need to know what they are paying for and when they will be getting it.
  7. All customers want to read case studies about successful projects, because that will reassure them that DC knows its stuff.

Using this list, we came up with design solutions for every journey.

Onboarding

The previous home page of Design Cafe was confusing. It needed to present more information about the business; the lack of it left people unsure what DC was about. We divided the home page into several sections and designed it so that every section could satisfy the needs of one of our target groups:

  1. For new visitors (the purple flow), we included a short trip through the main unique selling points (USPs) of the service, the way it works, some success stories and an option to start the style quiz.

  2. For returning visitors (the blue flow), who will most likely skip the home page or use it as a waypoint, the hero section and the navigation pointed a way out to browsing designs.

  3. We left a small part at the end of the page (the orange flow) for potential employees, describing what there is to love about DC and a CTA that goes to the careers page.

The whole point of the onboarding process was to capture the customer’s attention so that they could continue forward, either directly to the design catalog or through a feature we called the style quiz.

Browsing designs

We made the style quiz to help users narrow down their results.

DC previously had a feature called a 3D builder that we decided to remove. It allowed you to set your room size and then drag-and-drop furniture, windows and doors into the mix. In theory, this sounds good, but in reality people treated it much like a game and expected it to function like a minified version of The Sims’ Build Mode.

The Sims' Build Mode, by Electronic Arts.

Everything made with the 3D builder was ending up completely modified by the designers. The tool was giving people a lot of design power and too many choices. On top of that, supporting it was a huge technical endeavor because it was a whole product on its own.

Compared to it, the style quiz was a relatively simple feature:

  1. It starts out by asking about colors, textures and designs you like.

  2. It continues to ask about room type.

  3. Eventually, it displays a curated list of designs based on your answers.

The whole quiz wizard extends to only four steps and takes less than a minute to complete. But it makes people invest a tad bit of their time, thus creating engagement. The result: We’re improving conversion time and overall satisfaction.

Alternatively, users can skip the style quiz and go directly to the design catalog, then use the filters to fine-tune the results. The page automatically shows kitchen designs, which are what most people are looking for. And for the price-conscious, we made a small feature that allows them to input their room’s size, and all prices are recalculated.

If people don’t like anything from the catalog, chances are they are not DC’s target customer and there’s not much we can do to keep them on the website. But if they do like a design, they could decide to go forward and get in touch with DC, which brings us to the next step in the process.

Getting in Touch

Contacting DC needed to be as simple as possible. We implemented three ways to do that:

  • through the chat, shown on every page — the quickest way;

  • by opening the contact page and filling out the form or by just calling DC on the phone;

  • by clicking “Book a consultation” in the header, which asks for basic information and requests an appointment (upon submission, the next steps are shown to let users know what exactly is going to happen).

The rest of this journey continues offline: Potential customers meet a DC designer and, after some discussions and planning, place an order. DC notifies them of any progress via email and sends them a link to the progress tracker.

Order Status

The progress tracker is in a user menu in the top-right corner of the design. Its goal is to show a timeline of the order. Upon an update, an “unread” notification pops out. Most users, however, will usually find out about order updates through email, so the entry point for the whole flow will be external.

Once the interior design order is installed and ready, users will have the completed order on the website for future reference. Their project could be featured on the home page and become part of the case studies.

Case Studies

One of DC’s long-term goals is for its website to become an influencer hub for interior design, filled with case studies, advice and tips. It’s part of a commitment to providing quality content. But DC doesn’t have that content yet. So, we decided to start that section with minimal effort and introduce it as a blog. The client would gradually fill it up with content and detailed process walkthroughs. These would be later expanded and featured on the home page. Case studies are a feature that could significantly increase brand awareness, though they would take time.

Preparing for Visual Design

With the critical user journeys all figured out and wireframed, we were ready to delve into visual design.

Data showed that most people open the website on their phones, but interviews proved that most of them were more willing to buy through a computer, rather than a mobile device. Also, desktop and laptop users were more engaged and loyal. So, we decided to design for desktop-first and work down to the smaller (mobile) resolutions from it in code.

Visual Design

We started collecting visual ideas, words and images. Initially, we had a simple word sequence based on our conversations with the client and a mood board with relevant designs and ideas. The main visual features we were after were simplicity, bold typography, nice photos and clean icons.

Useful tip: Don’t follow a certain trend just because everybody else is doing it. Create a thorough mood board of relevant reference designs that approximate the look and feel you’re going after. This look should be in line with your goals and target audience.

Simple, elegant, easy, modern, hip, edgy, brave, quality, understanding, fresh, experience, classy. Mood board.

Our client had already started working on a photo shoot, and the results were great. Stock photography would have ruined everything personal about this website. The resulting photos blended with the big type pretty well and helped with that simple language we were after.

Typography

Initially, we went with a combination of Raleway and Roboto for the typography. Raleway is a great font but a bit overused. The second iteration was Abril Fatface with Raleway for the copy. Abril Fatface resembles the splendor of Didot and made the whole page a lot more heavy and pretentious. It was an interesting direction to explore, but it didn’t resonate with the modern, techy feel of DC. The last iteration was Nexa for the titles, paired with Lato for the copy — both a great fit, and the best choice thanks to Nexa’s modern and edgy feel.

Useful tip: Play around with type variations. List them side by side to see how they compare. Go to Typewolf, MyFonts or a similar website to get inspired. Look for typefaces that make sense for your product. Consider readability and accessibility. Don’t go overboard with your type scale; keep it as minimal as possible. Check out Butterick’s summary of key rules if in doubt.

Colors

DC already had a color scheme, but they gave us the freedom to experiment. The main colors were tints of cyan, golden and plum (or, rather, a strange kind of bordeaux), but the original hues were too faded and didn’t blend with each other well enough.

Useful tip: If the brand already has colors, test slight variations to see how they fit the overall design. Or remove some of the colors and use only one or two. Try designing your layout in monochrome and then test different color combinations on an already mocked-up design. Check out some other great tips by Wojciech Zieliński in his article “How to Use Colors in UI Design: Practical Tips and Tools”.

Here’s what we decided on in the end:

The way we presented all of those type variants and colors was through iterations on the home page.

Initial Mockups

We focused the first visual iteration on getting the main information clearly visible and squeezing the most out of the testimonials and style quiz sections. After some discussion, we figured it was too plain and needed improvement. We made changes to the fonts and icons and modified some sections, shown in iterations 2 and 3 in the image below.

We didn’t have the time to design custom icons, but the Noun Project came to the rescue. Because the icons come as SVG files, it’s very simple to tweak whatever you need and combine them with other elements. This sped up our work immensely, and with visual iteration number 4, we signed off on the design of the home page. This allowed us to focus on components and use them as LEGO blocks to build the templates.

Components System

I listed most components (see PDF) in a Sketch artboard to keep them accessible. Whenever the design needed a new pattern, we’d come back to this page and look for ways to reuse elements. Having a visual system in place, even for a small project like this, kept things consistent and simple.

Useful tip: Components, atoms, blocks — no matter what you call them, they are all part of systematic thinking about your design. Design systems help you gain a deeper understanding of your product by urging you to focus on patterns, design principles and design language. If you’re new to this approach, check out Brad Frost’s Atomic Design or Alla Kholmatova’s Design Systems.

Part of the pattern library.

Prototyping With Code

Useful tip: Work on a prototype first. You can make a prototype using basic HTML, CSS and JavaScript. Or you can use InVision, Marvel, Adobe XD or even the Sketch app, or your favorite prototyping tool. It doesn’t really matter. The important thing is to realize that only when you prototype will you see how your design will function.

For our prototype, we decided to use code and set up a simple build process to speed up our work.

Picking tools and processes

Gulp automated everything. If you haven’t heard of it, check out Callum Macrae’s awesome guide. Gulp enabled us to handle all of the styles, scripts and templates, and to output a ready-to-use, minified production version of the code.

Some of the more important Gulp plugins we used were (a minimal gulpfile sketch follows the list):

  • gulp-postcss
    This allows you to use PostCSS. You can bundle it with plugins like cssnext to get a pretty robust and versatile setup.
  • browser-sync
    This sets up a server and automatically updates the view on every change. You can set it to fire up upon starting “gulp watch”, and everything will be synced up on hitting “Save”.
  • gulp-compile-handlebars
    This is a Handlebars implementation for Gulp. It’s a quick way to create templates and reuse them. Imagine you have a button that stays the same throughout the whole design. It would be a symbol in Sketch. It’s basically the same concept but wrapped in HTML. Whenever you want to use that button, you just include the button template. If you change something in the master template, it propagates the changes to every other button in the design. You do that for everything in the design system, and thus you’re using the same paradigm for both visual design and code. No more static page mockups!
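To make the setup more concrete, here is a minimal gulpfile sketch wiring these plugins together. It uses Gulp 4 syntax; the src/ and dist/ paths, the cssnext plugin and the Handlebars options are assumptions for illustration, not our exact build:

// gulpfile.js — minimal sketch; paths and plugin options are assumptions
const gulp = require('gulp');
const postcss = require('gulp-postcss');
const handlebars = require('gulp-compile-handlebars');
const rename = require('gulp-rename');
const browserSync = require('browser-sync').create();

// Run the CSS through PostCSS (add cssnext or any other plugins you need)
function styles() {
  return gulp.src('src/css/*.css')
    .pipe(postcss([require('postcss-cssnext')()]))
    .pipe(gulp.dest('dist/css'))
    .pipe(browserSync.stream());
}

// Render Handlebars templates into plain HTML, reusing partials such as the button
function templates() {
  return gulp.src('src/templates/*.hbs')
    .pipe(handlebars({}, { batch: ['src/templates/partials'] })) // options are illustrative
    .pipe(rename({ extname: '.html' }))
    .pipe(gulp.dest('dist'));
}

// Serve the prototype and refresh the browser on every change
function reload(done) {
  browserSync.reload();
  done();
}

function serve() {
  browserSync.init({ server: 'dist' });
  gulp.watch('src/css/*.css', styles);
  gulp.watch('src/templates/**/*.hbs', gulp.series(templates, reload));
}

exports.default = gulp.series(gulp.parallel(styles, templates), serve);

Running gulp then builds the styles and templates once and keeps the browser in sync while you iterate on the prototype.
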
Components and templates

We had to mix atomic CSS with module-based CSS to get the best of both worlds. Atomic CSS handled all of the general styles, while the CSS modules handled edge cases.

In atomic CSS, atoms are immutable CSS classes that do just one thing. We used Tachyons, an atomic toolkit. In Tachyons, every class you apply is a single CSS property. For instance, .b stands for font-weight: bold, and .ttu stands for text-transform: uppercase. A paragraph with bold uppercase text would look like this:

<p class="b ttu">Paragraph</p>

Useful tip: Once you get familiar with atomic CSS, it becomes a blazingly fast way to prototype stuff — and a very systematic one, because it urges you to constantly think about reusability and optimization.

A major benefit of prototyping with code is that you can demo complex interactions. We coded most of our critical journeys this way.

Designing micro-interactions in the browser

Our prototype was so high-fidelity that it became the front-end basis for the actual product — DC used our code and integrated it in their workflow. You can check out the prototype on http://beta.boyankostov.com/2017/designcafe/html (or live on http://designcafe.com).

Useful tip: With HTML prototypes, you will have to decide how much fidelity you want to achieve. Going deep can get pretty time-consuming, but you can’t really go wrong either: as you fine-tune every possible detail in code, at some point you’ll start delivering the actual product.

Sign-off

Clients, especially small B2C companies, love when you deliver a design solution that they can use immediately. We shipped just that.

Unfortunately, you can’t always predict a project’s pace, and it took several months for our code to be integrated in DC’s workflow. In its current state, this code is ready for testing, and what’s better is that it’s pretty easy to modify. So, if DC decides to conduct some user tests in the future, any changes will be easy to make.

Takeaways
  • Collaborate with other designers whenever possible. When two people are thinking about the same problem, they will deliver better ideas. Take turns in taking notes during interviews, and brainstorm goals, ideas and visuals together.

  • Having a developer on the team is beneficial because everyone gets to do what they are best at. A good developer will spend as little as a few minutes on a JavaScript issue that I would probably need hours to resolve.

  • We shipped a working version of the website, and the client was able to use it right away. If you aren’t able to sign off on the code, try to get as close to the final product as possible, and communicate that visually to your client’s team. Document your design — it’s a deliverable that will be used and abused by everyone, from developers to marketers to in-house designers. Set aside some time to make sure all of your ideas are properly understood by everyone.

  • Scheduling interviews and writing good surveys can be time-consuming. You have to plan ahead and recruit more people than you think you will need. Hire an experienced researcher to work with you on these tasks, and spend some time with your team to identify your goals. Be careful when sourcing participants. Your client can help you find the right people, but you’ll need to stick to participants who meet the right demographics.

  • Schedule enough time for planning. Project goals, processes, and responsibilities should be clear to everyone on your team. You need time to allow for multiple iterations on prototypes, because prototypes improve products quickly. If you don’t want to mess with code, there are various ways to prototype. But even if you do, you don’t need to write flawless code — just write designer’s code. Or, as Alan Cooper once said, “Sometimes the best way for a designer to communicate their vision is to code something up so that their colleagues can interact with the proposed behavior, rather than just see still images. The goal of such code is not the same as the goal of the code that coders write. The code isn’t for deployment, but for design [and] its purpose is different.”

  • Don’t focus on a unique design per se, unless that’s the main feature of your product. Better to spend time on things that matter more. Use frameworks, icons and visual assets where possible, or outsource them to another designer and focus on your core product goals and metrics.

(mb, ra, al, yk, il)
Categories: Web Design

Introduction to the Stimulus Framework

Tuts+ Code - Web Development - Fri, 04/20/2018 - 05:15

There are lots of JavaScript frameworks out there. Sometimes I even start to think that I'm the only one who has not yet created a framework. Some solutions, like Angular, are big and complex, whereas some, like Backbone (which is more a library than a framework), are quite simple and only provide a handful of tools to speed up the development process.

In today's article, I would like to present a brand-new framework called Stimulus. It was created by the team at Basecamp, led by David Heinemeier Hansson, the creator of Ruby on Rails.

Stimulus is a small framework that was never intended to grow into something big. It has its very own philosophy and attitude towards front-end development, which some programmers might like or dislike. Stimulus is young, but version 1 has already been released so it should be safe to use in production. I've played with this framework quite a bit and really liked its simplicity and elegance. Hopefully, you will enjoy it too!

In this post we'll discuss the basics of Stimulus while creating a single-page application with asynchronous data loading, events, state persistence, and other common things.

The source code can be found on GitHub.

Introduction to Stimulus

Stimulus was created by developers at Basecamp. Instead of building single-page JavaScript applications, they chose a majestic monolith powered by Turbolinks and some JavaScript. That JavaScript code evolved into a small and modest framework which does not require you to spend hours and hours learning all its concepts and caveats.

Stimulus is mostly meant to attach itself to existing DOM elements and work with them in some way. It is possible, however, to dynamically render the contents as well. All in all, this framework is quite different from other popular solutions as, for example, it persists state in HTML, not in JavaScript objects. Some developers may find it inconvenient, but do give Stimulus a chance, as it really may surprise you.

The framework has only three main concepts that you should remember, which are:

  • Controllers: JS classes with some methods and callbacks that attach themselves to the DOM. The attachment happens when a data-controller "magic" attribute appears on the page. The documentation explains that this attribute is a bridge between HTML and JavaScript, just like classes serve as bridges between HTML and CSS. One controller can be attached to multiple elements, and one element may be powered up by multiple controllers.
  • Actions: methods to be called on specific events. They are defined in special data-action attributes.
  • Targets: important elements that can be easily accessed and manipulated. They are specified with the help of data-target attributes.

As you can see, the attributes listed above allow you to separate content from behaviour logic in a very simple and natural way. Later in this article, we will see all these concepts in action and notice how easy it is to read an HTML document and understand what's going on.

Bootstrapping a Stimulus Application

Stimulus can be easily installed as an NPM package or loaded directly via a script tag, as explained in the docs. Also note that by default the framework integrates with the webpack module bundler, which supports goodies like controller autoloading. You are free to use any other build system, but in that case some more work will be needed.
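For reference, if you go the script-tag route without a bundler, controller autoloading is not available, so you register controllers by hand. A rough sketch, assuming the UMD build (which exposes a global Stimulus object) has already been loaded on the page:

// register controllers manually when no bundler is involved
const application = Stimulus.Application.start()

application.register("employees", class extends Stimulus.Controller {
  connect() {
    console.log("employees controller connected")
  }
})
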

The quickest way to get started with Stimulus is by utilizing this starter project that has Express web server and Babel already hooked up. It also depends on Yarn, so be sure to install it. To clone the project and install all its dependencies, run:

git clone https://github.com/stimulusjs/stimulus-starter.git
cd stimulus-starter
yarn install

If you'd prefer not to install anything locally, you may remix this project on Glitch and do all the coding right in your browser.

Great—we are all set and can proceed to the next section!

Some Markup

Suppose we are creating a small single-page application that presents a list of employees and loads information like their name, photo, position, salary, birthdate, etc.

Let's start with the list of employees. All the markup that we are going to write should be placed inside the public/index.html file, which already has some very minimal HTML. For now, we will hard-code all our employees in the following way:

<h1>Our employees</h1>

<div>
  <ul>
    <li><a href="#">John Doe</a></li>
    <li><a href="#">Alice Smith</a></li>
    <li><a href="#">Will Brown</a></li>
    <li><a href="#">Ann Grey</a></li>
  </ul>
</div>

Nice! Now let's add a dash of Stimulus magic.

Creating a Controller

As the official documentation explains, the main purpose of Stimulus is to connect JavaScript objects (called controllers) to the DOM elements. The controllers will then bring the page to life. As a convention, controllers' names should end with a _controller postfix (which should be very familiar to Rails developers).

There is a directory for controllers already available called src/controllers. Inside, you will find a  hello_controller.js file that defines an empty class:

import { Controller } from "stimulus"

export default class extends Controller {
}

Let's rename this file to employees_controller.js. We don't need to specifically require it because controllers are loaded automatically thanks to the following lines of code in the src/index.js file:

const application = Application.start()
const context = require.context("./controllers", true, /\.js$/)
application.load(definitionsFromContext(context))

The next step is to connect our controller to the DOM. In order to do this, set a data-controller attribute and assign it an identifier (which is employees in our case):

<div data-controller="employees">
  <ul>
    <!-- your list -->
  </ul>
</div>

That's it! The controller is now attached to the DOM.

Lifecycle Callbacks

One important thing to know about controllers is that they have three lifecycle callbacks that get fired on specific conditions:

  • initialize: this callback happens only once, when the controller is instantiated.
  • connect: fires whenever we connect the controller to the DOM element. Since one controller may be connected to multiple elements on the page, this callback may run multiple times.
  • disconnect: as you’ve probably guessed, this callback runs whenever the controller disconnects from the DOM element (see the cleanup sketch after this list).
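For illustration, here is a minimal sketch of the usual connect()/disconnect() pairing for cleaning up resources such as timers (the interval is just an example; our employees controller won’t need it):

import { Controller } from "stimulus"

export default class extends Controller {
  connect() {
    // start a timer when the controller attaches to an element
    this.timer = setInterval(() => console.log("tick"), 1000)
  }

  disconnect() {
    // stop the timer when the element goes away, so nothing keeps running
    clearInterval(this.timer)
  }
}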

Nothing complex, right? Let's take advantage of the initialize() and connect() callbacks to make sure our controller actually works:

// src/controllers/employees_controller.js
export default class extends Controller {
  initialize() {
    console.log('Initialized')
    console.log(this)
  }

  connect() {
    console.log('Connected')
    console.log(this)
  }
}

Next, start the server by running:

yarn start

Navigate to http://localhost:9000. Open your browser's console and make sure both messages are displayed. It means that everything is working as expected!

Adding Events

The next core Stimulus concept is events. Events are used to respond to various user actions on the page: clicking, hovering, focusing and so on. Stimulus does not try to reinvent the wheel; its event system is based on generic JavaScript events.

For instance, let's bind a click event to our employees. Whenever this event happens, I would like to call the as yet non-existent choose() method of the employees_controller:

<ul>
  <li><a href="#" data-action="click->employees#choose">John Doe</a></li>
  <li><a href="#" data-action="click->employees#choose">Alice Smith</a></li>
  <li><a href="#" data-action="click->employees#choose">Will Brown</a></li>
  <li><a href="#" data-action="click->employees#choose">Ann Grey</a></li>
</ul>

You can probably work out what’s going on here by yourself:

  • data-action is the special attribute that binds an event to the element and explains what action should be called.
  • click, of course, is the event's name.
  • employees is the identifier of our controller.
  • choose is the name of the method that we'd like to call.

Since click is the most common event, it can be safely omitted:

<li><a href="#" data-action="employees#choose">John Doe</a></li>

In this case, click will be used implicitly.

Next, let's code the choose() method. I don't want the default action to happen (which is, obviously, opening a new page specified in the href attribute), so let's prevent it:

// src/controllers/employees_controller.js

// callbacks here...

choose(e) {
  e.preventDefault()
  console.log(this)
  console.log(e)
}

e is the special event object that contains full information about the triggered event. Note, by the way, that this points to the controller itself, not to an individual link! In order to gain access to the element that acts as the event’s target, use e.target.

Reload the page, click on a list item, and observe the result!

Working With the State

Now that we have bound a click event handler to the employees, I’d like to store the currently chosen person. Why? Having stored this info, we can prevent the same employee from being selected a second time. It will also allow us to avoid loading the same information multiple times later on.

Stimulus instructs us to persist state in the Data API, which seems quite reasonable. First of all, let's provide some arbitrary ids for each employee using the data-id attribute:

<ul>
  <li><a href="#" data-id="1" data-action="employees#choose">John Doe</a></li>
  <li><a href="#" data-id="2" data-action="click->employees#choose">Alice Smith</a></li>
  <li><a href="#" data-id="3" data-action="click->employees#choose">Will Brown</a></li>
  <li><a href="#" data-id="4" data-action="click->employees#choose">Ann Grey</a></li>
</ul>

Next, we need to fetch the id and persist it. Using the Data API is very common with Stimulus, so a special this.data object is provided for each controller. With its help, we can run the following methods (a short sketch follows the list):

  • this.data.get('name'): get the value by its attribute.
  • this.data.set('name', value): set the value under some attribute.
  • this.data.has('name'): check if the attribute exists (returns a boolean value).
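To make that concrete, here is a tiny sketch of all three calls inside a controller action (the "favorite-color" attribute is made up for illustration):

// inside any employees_controller action; "favorite-color" is a made-up attribute
if (!this.data.has("favorite-color")) {
  this.data.set("favorite-color", "plum") // stored as data-employees-favorite-color on the element
}

console.log(this.data.get("favorite-color")) // "plum"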

Unfortunately, these shortcuts are not available for the targets of the click events, so we must stick with getAttribute() in their case:

// src/controllers/employees_controller.js
choose(e) {
  e.preventDefault()
  this.data.set("current-employee", e.target.getAttribute('data-id'))
}

But we can do even better by creating a getter and a setter for the currentEmployee:

// src/controllers/employees_controller.js
get currentEmployee() {
  return this.data.get("current-employee")
}

set currentEmployee(id) {
  if (this.currentEmployee !== id) {
    this.data.set("current-employee", id)
  }
}

Notice how we are using the this.currentEmployee getter and making sure that the provided id is not the same as the already stored one.

Now you may rewrite the choose() method in the following way:

// src/controllers/employees_controller.js
choose(e) {
  e.preventDefault()
  this.currentEmployee = e.target.getAttribute('data-id')
}

Reload the page to make sure that everything still works. You won't notice any visual changes yet, but with the help of the Inspector tool you'll notice that the ul has the data-employees-current-employee attribute with a value that changes as you click on the links. The employees part in the attribute's name is the controller's identifier and is being added automatically.

Now let's move on and highlight the currently chosen employee.

Using Targets

When an employee is selected, I would like to assign the corresponding element with a .chosen class. Of course, we might have solved this task by using some JS selector functions, but Stimulus provides a neater solution.

Meet targets, which allow you to mark one or more important elements on the page. These elements can then be easily accessed and manipulated as needed. In order to create a target, add a data-target attribute with the value of {controller}.{target_name} (which is called a target descriptor):

<ul data-controller="employees">
  <li><a href="#" data-target="employees.employee" data-id="1" data-action="employees#choose">John Doe</a></li>
  <li><a href="#" data-target="employees.employee" data-id="2" data-action="click->employees#choose">Alice Smith</a></li>
  <li><a href="#" data-target="employees.employee" data-id="3" data-action="click->employees#choose">Will Brown</a></li>
  <li><a href="#" data-target="employees.employee" data-id="4" data-action="click->employees#choose">Ann Grey</a></li>
</ul>

Now let Stimulus know about these new targets by defining a new static value:

// src/controllers/employees_controller.js
export default class extends Controller {
  static targets = [ "employee" ]

  // ...
}

How do we access the targets now? It's as simple as saying this.employeeTarget (to get the first element) or this.employeeTargets (to get all the elements):

// src/controllers/employees_controller.js
choose(e) {
  e.preventDefault()
  this.currentEmployee = e.target.getAttribute('data-id')

  console.log(this.employeeTargets)
  console.log(this.employeeTarget)
}

Great! How can these targets help us now? Well, we can use them to add and remove CSS classes with ease based on some criteria:

// src/controllers/employees_controller.js
choose(e) {
  e.preventDefault()
  this.currentEmployee = e.target.getAttribute('data-id')

  this.employeeTargets.forEach((el, i) => {
    el.classList.toggle("chosen", this.currentEmployee === el.getAttribute("data-id"))
  })
}

The idea is simple: we iterate over the array of targets and, for each target, compare its data-id to the one stored under this.currentEmployee. If it matches, the element is assigned the .chosen class. Otherwise, the class is removed. You may also extract the if (this.currentEmployee !== id) condition from the setter and use it in the choose() method instead:

// src/controllers/employees_controller.js
choose(e) {
  e.preventDefault()
  const id = e.target.getAttribute('data-id')

  if (this.currentEmployee !== id) { // <---
    this.currentEmployee = id

    this.employeeTargets.forEach((el, i) => {
      el.classList.toggle("chosen", id === el.getAttribute("data-id"))
    })
  }
}

Looking nice! Lastly, we'll provide some very simple styling for the .chosen class inside the public/main.css:

.chosen {
  font-weight: bold;
  text-decoration: none;
  cursor: default;
}

Reload the page once again, click on a person, and make sure that person is being highlighted properly.

Loading Data Asynchronously

Our next task is to load information about the chosen employee. In a real-world application, you would have to set up a hosting provider, a back-end powered by something like Django or Rails, and an API endpoint that responds with JSON containing all the necessary data. But we are going to make things a bit simpler and concentrate on the client side only. Create an employees directory under the public folder. Next, add four files containing data for individual employees:

1.json

{ "name": "John Doe", "gender": "male", "age": "40", "position": "CEO", "salary": "$120.000/year", "image": "https://burst.shopifycdn.com/photos/couple-in-love-at-sunset_373x.jpg" }

2.json

{ "name": "Alice Smith", "gender": "female", "age": "32", "position": "CTO", "salary": "$100.000/year", "image": "https://burst.shopifycdn.com/photos/woman-listening-at-team-meeting_373x.jpg" }

3.json

{ "name": "Will Brown", "gender": "male", "age": "30", "position": "Tech Lead", "salary": "$80.000/year", "image": "https://burst.shopifycdn.com/photos/casual-urban-menswear_373x.jpg" }

4.json

{ "name": "Ann Grey", "gender": "female", "age": "25", "position": "Junior Dev", "salary": "$20.000/year", "image": "https://burst.shopifycdn.com/photos/woman-using-tablet_373x.jpg" }

All photos were taken from the free stock photography by Shopify called Burst.

Our data is ready and waiting to be loaded! In order to do this, we'll code a separate loadInfoFor() method:

// src/controllers/employees_controller.js
loadInfoFor(employee_id) {
  fetch(`employees/${employee_id}.json`)
    .then(response => response.text())
    .then(json => {
      this.displayInfo(json)
    })
}

This method accepts an employee’s id and sends an asynchronous fetch request to the given URI. The first then() extracts the response body as text, and the second passes it to a displayInfo() method that renders the loaded info (we’ll add that method in a moment).

Utilize this new method inside choose():

// src/controllers/employees_controller.js
choose(e) {
  e.preventDefault()
  const id = e.target.getAttribute('data-id')

  if (this.currentEmployee !== id) {
    this.loadInfoFor(id)
    // ...
  }
}

Before coding the displayInfo() method, we need an element to actually render the data to. Why don't we take advantage of targets once again?

<!-- public/index.html -->
<div data-controller="employees">
  <div data-target="employees.info"></div>

  <ul>
    <!-- ... -->
  </ul>
</div>

Define the target:

// src/controllers/employees_controller.js
export default class extends Controller {
  static targets = [ "employee", "info" ]

  // ...
}

And now utilize it to display all the info:

// src/controllers/employees_controller.js
displayInfo(raw_json) {
  const info = JSON.parse(raw_json)
  const html = `<ul><li>Name: ${info.name}</li><li>Gender: ${info.gender}</li><li>Age: ${info.age}</li><li>Position: ${info.position}</li><li>Salary: ${info.salary}</li><li><img src="${info.image}"></li></ul>`
  this.infoTarget.innerHTML = html
}

Of course, you are free to employ a templating engine like Handlebars, but for this simple case that would probably be overkill.
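Just to show what that alternative might look like, here is a rough sketch of displayInfo() delegating to a precompiled Handlebars template. It assumes the handlebars package has been added to the bundle; it is an illustration, not part of the final app:

// src/controllers/employees_controller.js (hypothetical variation)
import Handlebars from "handlebars"

// compiled once, reused for every employee
const employeeTemplate = Handlebars.compile(
  '<ul><li>Name: {{name}}</li><li>Position: {{position}}</li><li>Salary: {{salary}}</li><li><img src="{{image}}"></li></ul>'
)

// ...and inside the controller:
displayInfo(raw_json) {
  this.infoTarget.innerHTML = employeeTemplate(JSON.parse(raw_json))
}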

Now reload the page and choose one of the employees. Their bio and image should load nearly instantly, which means our app is working properly!

Dynamic List of Employees

Using the approach described above, we can go even further and load the list of employees on the fly rather than hard-coding it.

Prepare the data inside the public/employees.json file:

[ { "id": "1", "name": "John Doe" }, { "id": "2", "name": "Alice Smith" }, { "id": "3", "name": "Will Brown" }, { "id": "4", "name": "Ann Grey" } ]

Now tweak the public/index.html file by removing the hard-coded list and adding a data-employees-url attribute (note that we must provide the controller's name, otherwise the Data API won't work):

<div data-controller="employees" data-employees-url="/employees.json">
  <div data-target="employees.info"></div>
</div>

As soon as the controller is attached to the DOM, it should send a fetch request to build the list of employees. That makes the connect() callback the perfect place to do this:

// src/controllers/employees_controller.js
connect() {
  this.loadFrom(this.data.get('url'), this.displayEmployees)
}

I propose we create a more generic loadFrom() method that accepts a URL to load data from and a callback to actually render this data:

// src/controllers/employees_controller.js
loadFrom(url, callback) {
  fetch(url)
    .then(response => response.text())
    .then(json => {
      callback.call(this, JSON.parse(json))
    })
}

Tweak the choose() method to take advantage of the loadFrom():

// src/controllers/employees_controller.js
choose(e) {
  e.preventDefault()
  const id = e.target.getAttribute('data-id')

  if (this.currentEmployee !== id) {
    this.loadFrom(`employees/${id}.json`, this.displayInfo) // <---
    this.currentEmployee = id

    this.employeeTargets.forEach((el, i) => {
      el.classList.toggle("chosen", id === el.getAttribute("data-id"))
    })
  }
}

displayInfo() can be simplified as well, since JSON is now being parsed right inside the loadFrom():

// src/controllers/employees_controller.js
displayInfo(info) {
  const html = `<ul><li>Name: ${info.name}</li><li>Gender: ${info.gender}</li><li>Age: ${info.age}</li><li>Position: ${info.position}</li><li>Salary: ${info.salary}</li><li><img src="${info.image}"></li></ul>`
  this.infoTarget.innerHTML = html
}

Remove loadInfoFor() and code the displayEmployees() method:

// src/controllers/employees_controller.js
displayEmployees(employees) {
  let html = "<ul>"

  employees.forEach((el) => {
    html += `<li><a href="#" data-target="employees.employee" data-id="${el.id}" data-action="employees#choose">${el.name}</a></li>`
  })

  html += "</ul>"
  this.element.innerHTML += html
}

That's it! We are now dynamically rendering our list of employees based on the data returned by the server.

Conclusion

In this article, we have covered a modest JavaScript framework called Stimulus. We have seen how to create a new application, add a controller with lifecycle callbacks, wire up actions and events, and work with targets and state. Also, we’ve done some asynchronous data loading with the help of fetch requests.

All in all, that’s it for the basics of Stimulus: it really does not expect you to have arcane knowledge in order to craft web applications. Of course, the framework will probably gain new features in the future, but the developers are not planning to turn it into a huge monster with hundreds of tools.

If you’d like to find more examples of using Stimulus, you may also check out this tiny handbook. And if you’re looking for additional JavaScript resources to study or to use in your work, check out what we have available in the Envato Market.

Did you like Stimulus? Would you be interested in trying to create a real-world application powered by this framework? Share your thoughts in the comments!

As always, I thank you for staying with me and until the next time.

Categories: Web Design

Single-Page React Applications With the React-Router and React-Transition-Group Modules

Tuts+ Code - Web Development - Fri, 04/20/2018 - 05:00

This tutorial will walk you through using the react-router and react-transition-group modules to create multi-page React applications with page transition animations.

Preparing the React App

Installing the create-react-app Package

If you've ever had the chance to try React, you've probably heard about the create-react-app package, which makes it super easy to start with a React development environment.

In this tutorial, we will use this package to initiate our React app.

So, first of all, make sure you have Node.js installed on your computer. It will also install npm for you.

In your terminal, run npm install -g create-react-app. This will globally install create-react-app on your computer.

Once it is done, you can verify whether it is there by typing create-react-app -V.

Creating the React Project

Now it's time to build our React project. Just run create-react-app multi-page-app. You can, of course, replace multi-page-app with anything you want.

Now, create-react-app will create a folder named multi-page-app. Just type cd multi-page-app to change directory, and now run npm start to initialize a local server.

That's all. You have a React app running on your local server.

Now it's time to clean the default files and prepare our application.

In your src folder, delete everything but App.js and index.js. Then open index.js and replace the content with the code below.

import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';

ReactDOM.render(<App />, document.getElementById('root'));

I basically deleted the registerServiceWorker related lines and also the import './index.css'; line.

Also, replace your App.js file with the code below.

import React, { Component } from 'react';

class App extends Component {
  render() {
    return (
      <div className="App">
      </div>
    );
  }
}

export default App;

Now we will install the required modules.

In your terminal, type the following commands to install the react-router and react-transition-group modules respectively.

npm install react-router-dom --save

npm install react-transition-group@1.x --save

After installing the packages, you can check the package.json file inside your main project directory to verify that the modules are included under dependencies.

Router Components

There are basically two different router options: HashRouter and BrowserRouter.

As the name implies, HashRouter uses hashes to keep track of your links, and it is suitable for static servers. On the other hand, if you have a dynamic server, it is a better option to use BrowserRouter, considering the fact that your URLs will be prettier.

Once you decide which one you should use, just go ahead and add the component to your index.js file.

import { HashRouter } from 'react-router-dom'

The next thing is to wrap our <App> component with the router component.

So your final index.js file should look like this:

import React from 'react';
import ReactDOM from 'react-dom';
import { HashRouter } from 'react-router-dom'
import App from './App';

ReactDOM.render(<HashRouter><App/></HashRouter>, document.getElementById('root'));

If you're using a dynamic server and prefer to use BrowserRouter, the only difference would be importing the BrowserRouter and using it to wrap the <App> component.

By wrapping our <App> component, we are serving the history object to our application, and thus other react-router components can communicate with each other.

Inside <App/> Component

Inside our <App> component, we will have two components named <Menu> and <Content>. As the names imply, they will hold the navigation menu and displayed content respectively.

Create a folder named "components" in your src directory, and then create the Menu.js and Content.js files.

Menu.js

Let's fill in our Menu.js component.

It will be a stateless functional component since we don't need states and life-cycle hooks.

import React from 'react'

const Menu = () => {
  return (
    <ul>
      <li>Home</li>
      <li>Works</li>
      <li>About</li>
    </ul>
  )
}

export default Menu

Here we have a <ul> tag with <li> tags, which will be our links.

Now add the following line to your Menu component.

import { Link } from 'react-router-dom'

And then wrap the content of the <li> tags with the <Link> component.

The <Link> component is essentially a react-router component acting like an <a> tag, but it does not reload your page with a new target link.

Also, if you style your a tag in CSS, you will notice that the <Link> component gets the same styling.

Note that there is a more advanced version of the <Link> component, which is <NavLink>. This offers you extra features so that you can style the active links.
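For example, here is a rough sketch of the same menu built with <NavLink>, which adds a class of your choosing to the currently active link (the to prop, covered next, tells each link where to navigate; the active-link class name is an assumption):

import React from 'react'
import { NavLink } from 'react-router-dom'

// NavLink works like Link but flags the active route with activeClassName
const Menu = () => (
  <ul>
    <li><NavLink exact to="/" activeClassName="active-link">Home</NavLink></li>
    <li><NavLink to="/works" activeClassName="active-link">Works</NavLink></li>
    <li><NavLink to="/about" activeClassName="active-link">About</NavLink></li>
  </ul>
)

export default Menu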

Now we need to define where each link will navigate. For this purpose, the <Link> component has a to prop.

import React from 'react'
import { Link } from 'react-router-dom'

const Menu = () => {
  return (
    <ul>
      <li><Link to="/">Home</Link></li>
      <li><Link to="/works">Works</Link></li>
      <li><Link to="/about">About</Link></li>
    </ul>
  )
}

export default Menu

Content.js

Inside our <Content> component, we will define the Routes to match the Links.

We need the Switch and Route components from react-router-dom. So, first of all, import them.

import { Switch, Route } from 'react-router-dom'

Second of all, import the components that we want to route to. These are the Home, Works and About components for our example. Assuming you have already created those components inside the components folder, we also need to import them.

import Home from './Home'

import Works from './Works'

import About from './About'

Those components can be anything. I just defined them as stateless functional components with minimum content. An example template is below. You can use this for all three components, but just don't forget to change the names accordingly.

import React from 'react'

const Home = () => {
  return (
    <div>
      Home
    </div>
  )
}

export default Home

Switch

We use the <Switch> component to group our <Route> components. Switch looks for all the Routes and then returns the first matching one.

Route

Routes are components calling your target component if it matches the path prop.

The final version of our Content.js file looks like this:

import React from 'react'
import { Switch, Route } from 'react-router-dom'

import Home from './Home'
import Works from './Works'
import About from './About'

const Content = () => {
  return (
    <Switch>
      <Route exact path="/" component={Home}/>
      <Route path="/works" component={Works}/>
      <Route path="/about" component={About}/>
    </Switch>
  )
}

export default Content

Notice that the extra exact prop is required for the Home component, which is the main directory. Using exact forces the Route to match the exact pathname. If it's not used, other pathnames starting with / would also be matched by the Home component, and for each link, it would only display the Home component.

Now when you click the menu links, your app should be switching the content.

Animating the Route Transitions

So far, we have a working router system. Now we will animate the route transitions. In order to achieve this, we will use the react-transition-group module.

We will be animating the mounting state of each component. When you route different components with the Route component inside Switch, you are essentially mounting and unmounting different components accordingly.

We will use react-transition-group in each component we want to animate. So you can have a different mounting animation for each component. I will only use one animation for all of them.

As an example, let's use the <Home> component.

First, we need to import CSSTransitionGroup.

import { CSSTransitionGroup } from 'react-transition-group'

Then you need to wrap your content with it.

Since we are dealing with the mounting state of the component, we enable transitionAppear and set a timeout for it. We also disable transitionEnter and transitionLeave, since these are only valid once the component is mounted. If you are planning to animate any children of the component, you have to use them.

Lastly, add the specific transitionName so that we can refer to it inside the CSS file.

import React from 'react'
import { CSSTransitionGroup } from 'react-transition-group'
import '../styles/homeStyle.css'

const Home = () => {
  return (
    <CSSTransitionGroup
      transitionName="homeTransition"
      transitionAppear={true}
      transitionAppearTimeout={500}
      transitionEnter={false}
      transitionLeave={false}>
      <div>
        Home
      </div>
    </CSSTransitionGroup>
  )
}

export default Home

We also imported a CSS file, where we define the CSS transitions.

.homeTransition-appear {
  opacity: 0;
}

.homeTransition-appear.homeTransition-appear-active {
  opacity: 1;
  transition: all .5s ease-in-out;
}

If you refresh the page, you should see the fade-in effect of the Home component.

If you apply the same procedure to all the other routed components, you will see their individual animations when you change the content with your Menu.

Conclusion

In this tutorial, we covered the react-router-dom and react-transition-group modules. However, there's more to both modules than we covered in this tutorial. Here is a working demo of what was covered.

So, to learn more features, always go through the documentation of the modules you are using.

Over the last couple of years, React has grown in popularity. In fact, we have a number of items in the marketplace that are available for purchase, review, implementation, and so on. If you’re looking for additional resources around React, don’t hesitate to check them out.

Categories: Web Design

What You Need To Know To Increase Mobile Checkout Conversions

Smashing Magazine - Fri, 04/20/2018 - 04:35
What You Need To Know To Increase Mobile Checkout Conversions
Suzanna Scacca 2018-04-20T13:35:58+02:00

Google’s mobile-first indexing is here. Well, for some websites anyway. For the rest of us, it will be here soon enough, and our websites need to be in tip-top shape if we don’t want search rankings to be adversely affected by the change.

That said, responsive web design is nothing new. We’ve been creating custom mobile user experiences for years now, so most of our websites should be well poised to take this on… right?

Here’s the problem: Research shows that the dominant device through which users access the web, on average, is the smartphone. Granted, this might not be the case for every website, but the data indicates that this is the direction we’re headed in, and so every web designer should be prepared for it.

However, mobile checkout conversions are, to put it bluntly, not good. There are a number of reasons for this, but that doesn’t mean that m-commerce designers should take this lying down.

As more mobile users rely on their smart devices to access the web, websites need to be more adeptly designed to give them the simplified, convenient and secure checkout experience they want. In the following roundup, I’m going to explore some of the impediments to conversion in the mobile checkout and focus on what web designers can do to improve the experience.


Why Are Mobile Checkout Conversions Lagging?

According to the data, prioritizing the mobile experience in our web design strategies is a smart move for everyone involved. With people spending roughly 51% of their time with digital media through mobile devices (as opposed to only 42% on desktop), search engines and websites really do need to align with user trends.

Now, while that statistic paints a positive picture in support of designing websites with a mobile-first approach, other statistics are floating around that might make you wary of it. Here’s why I say that: Monetate’s e-commerce quarterly report issued for Q1 2017 had some really interesting data to show.

In this first table, they break down the percentage of visitors to e-commerce websites using different devices between Q1 2016 and Q1 2017. As you can see, smartphone Internet access has indeed surpassed desktop:

Website Visits by Device   Q1 2016   Q2 2016   Q3 2016   Q4 2016   Q1 2017
Traditional                49.30%    47.50%    44.28%    42.83%    42.83%
Smartphone                 36.46%    39.00%    43.07%    44.89%    44.89%
Other                      0.62%     0.39%     0.46%     0.36%     0.36%
Tablet                     13.62%    13.11%    12.19%    11.91%    11.91%

Monetate’s findings on which devices are used to access the Internet. (Source)

In this next data set, we can see that the average conversion rate for e-commerce websites isn’t great. In fact, the number has gone down significantly since the first quarter of 2016.

Conversion Rates   Q1 2016   Q2 2016   Q3 2016   Q4 2016   Q1 2017
Global             3.10%     2.81%     2.52%     2.94%     2.48%

Monetate’s findings on overall e-commerce global conversion rates (for all devices). (Source)

Even more shocking is the split between device conversion rates:

Conversion Rates by Device   Q1 2016   Q2 2016   Q3 2016   Q4 2016   Q1 2017
Traditional                  4.23%     3.88%     3.66%     4.25%     3.63%
Tablet                       1.42%     1.31%     1.17%     1.49%     1.25%
Other                        0.69%     0.35%     0.50%     0.35%     0.27%
Smartphone                   3.59%     3.44%     3.21%     3.79%     3.14%

Monetate’s findings on the average conversion rates, broken down by device. (Source)

Smartphones consistently receive fewer conversions than desktop, despite being the predominant device through which users access the web.

What’s the problem here? Why are we able to get people to mobile websites, but we lose them at checkout?

In its report from 2017 named “Mobile’s Hierarchy of Needs,” comScore breaks down the top five reasons why mobile checkout conversion rates are so low:

The most common reasons why m-commerce shoppers don’t convert. (Image: comScore)

Here is the breakdown for why mobile users don’t convert:

  • 20.2% — security concerns
  • 19.6% — unclear product details
  • 19.6% — inability to open multiple browser tabs to compare
  • 19.3% — difficulty navigating
  • 18.6% — difficulty inputting information.

Those are plausible reasons to move from the smartphone to the desktop to complete a purchase (if they haven’t been completely turned off by the experience by that point, that is).

In sum, we know that consumers want to access the web through their mobile devices. We also know that barriers to conversion are keeping them from staying put. So, how do we deal with this?

10 Ways to Increase Mobile Checkout Conversions In 2018

For most of the websites you’ve designed, you’re not likely to see much of a change in search ranking when Google’s mobile-first indexing becomes official.

Your mobile-friendly designs might be “good enough” to keep your websites at the top of search (to start, anyway), but what happens if visitors don’t stick around to convert? Will Google start penalizing you because your website can’t seal the deal with the majority of visitors? In all honesty, that scenario will only occur in extreme cases, where the mobile checkout is so poorly constructed that bounce rates skyrocket and people stop wanting to visit the website at all.

Let’s say that the drop-off in traffic at checkout doesn’t incur penalties from Google. That’s great… for SEO purposes. But what about for business? Your goal is to get visitors to convert without distraction and without friction. Yet, that seems to be what mobile visitors get.

Going forward, your goal needs to be two-fold:

  • to design websites with Google’s mobile-first mission and guidelines in mind,
  • to keep mobile users on the website until they complete a purchase.

Essentially, this means decreasing the amount of work users have to do and improving the visibility of your security measures. Here is what you can do to more effectively design mobile checkouts for conversions.

1. Keep the Essentials in the Thumb Zone

Research on how users hold their mobile phones is old hat by now. We know that, whether they use the single- or double-handed approach, certain parts of the mobile screen are just inconvenient for mobile users to reach. And when expediency is expected during checkout, this is something you don’t want to mess around with.

For single-handed users, the middle of the screen is the prime playing field:

The good, OK and bad areas for single-handed mobile users. (Image: UX Matters)

Although users who cradle their phones for greater stability have a couple options for which fingers to use to interact with the screen, only 28% use their index finger. So, let’s focus on the capabilities of thumb users, which, again, means giving the central part of the screen the most prominence:

The good, OK and bad areas for mobile users that cradle their phones. (Image: UX Matters)

Some users hold their phones with two hands. Because the horizontal orientation is more likely to be used for video, this won’t be relevant for mobile checkout. So, pay attention to how much space of that screen is feasibly within reach of the user’s thumb:

The good, OK and bad areas for two-handed mobile users. (Image: UX Matters)

In sum, we can use Smashing Magazine’s breakdown of where to focus content, regardless of left-hand, right-hand or two-handed holding of a smartphone:

A summary of where the good, OK and bad zones are on mobile devices. (Image: Smashing Magazine)

JCPenney’s website is a good example of how to do this:

JCPenney’s contact form starts midway down the page. (Image: JCPenney)

While information is included at the top of the checkout page, the input fields don’t start until just below the middle of it — directly in the ideal thumb zone for users of any type. This ensures that visitors holding their phones in any manner and using different fingers to engage with it will have no issue reaching the form fields.

2. Minimize Content to Maximize Speed

We’ve been taught over and over again that minimal design is best for websites. This is especially true in mobile checkout, where an already slow or frustrating experience could easily push a customer over the edge, when all they want to do is be done with the purchase.

To maximize speed during the mobile checkout process, keep the following tips in mind:

  • Only add the essentials to checkout. This is not the time to try to upsell or cross-sell, promote social media or otherwise distract from the action at hand.
  • Keep the checkout free of all images. The only eye-catching visuals that are really acceptable are trustmarks and calls to action (more on these below).
  • Any text included on the page should be instructional or descriptive in nature.
  • Avoid any special stylization of fonts. The less “wow” your checkout page has, the easier it will be for users to get through the process.

Look to Staples’ website as an example of what a highly simple single-page checkout should look like:

Staples has a single-page checkout with a minimal number of fields to fill out. (Image: Staples)

As you can see, Staples doesn’t bog down the checkout process with product images, branding, navigation, internal links or anything else that might (1) distract from the task at hand, or (2) suck resources from the server while it attempts to process your customers’ requests.

Not only will this checkout page be easy to get through, but it will load quickly and without issue every time — something customers will remember the next time they need to make a purchase. By keeping your checkout pages light in design, you ensure a speedy experience in all aspects.

3. Put Them at Ease With Trustmarks

A trustmark is any indicator on a website that lets customers know, “Hey, there’s absolutely nothing to worry about here. We’re keeping your information safe!”

The one trustmark that every m-commerce website should have? An SSL certificate. Without one, the address bar will not display the lock icon or the green https domain name, both of which let customers know that the connection is encrypted.

You can use other trustmarks at checkout as well.

Big Chill includes a RapidSSL trust seal to let customers know its data is encrypted. (Image: Big Chill)

While you can use logos from Norton Security, PCI compliance and other security software to let customers know your website is protected, users might also be swayed by recognizable and well-trusted names. When you think about it, this isn’t much different than displaying corporate logos beside customer testimonials or in callouts that boast of your big-name connections. If you can leverage a partnership like the ones mentioned below, you can use the inherent trust there to your benefit.

Take 6pm, which uses a “Login with Amazon” option at checkout:

6pm leverages the Amazon name as a trustmark. (Image: 6pm)

This is a smart move for a brand that most definitely does not have the brand-name recognition that a company like Amazon has. By giving customers a convenient option to log in with a brand that’s synonymous with speed, reliability and trust, the company might now become known for those same checkout qualities that Amazon is celebrated for.

Then, there are mobile checkout pages like the one on Sephora:

Sephora uses a trusted payment gateway provider as a trustmark. (Image: Sephora)

Sephora also uses this technique of leveraging another brand’s good name in order to build trust at checkout time. In this case, however, it presents customers with two clear options: Check out with us right now, or hop over to PayPal, which will take care of you securely. With security being a major concern that keeps mobile customers from converting, this kind of trustmark and payment method is a good move on Sephora’s part.

4. Provide Easier Editing

In general, never take a visitor (on any device) away from whatever they’re doing on your website. There are already enough distractions online; the last thing they need is for you to point them in a direction that keeps them from converting.

At checkout, however, your customers might feel compelled to do this very thing if they decide they want a different color, size or quantity of an item in their shopping cart. Rather than let them backtrack through the website, give them an in-checkout editing option to keep them in place.

Victoria’s Secret does this well:

Victoria’s Secret doesn’t force users away from checkout to edit items. (Image: Victoria’s Secret)

When they first get to the checkout screen, customers will see a list of items they’re about to purchase. When the large “Edit” button beside each item is clicked, a lightbox (shown above) opens with the product’s variations. It’s basically the original product page, just superimposed on top of the checkout. Users can adjust their options and save their changes without ever having to leave the checkout page.

If you find, in reviewing your website’s analytics, that users occasionally backtrack after hitting the checkout (you can see this in the sales funnel), add this built-in editing feature. By preventing this unnecessary movement backwards, you could save yourself lost conversions from confused or distracted customers.

5. Enable Express Checkout Options

When consumers check out on an e-commerce website through a desktop device, it probably isn’t a big deal if they have to input their user name, email address or payment information each time. Sure, if it can be avoided, they’ll find ways around it (like allowing the website to save their information or using a password manager such as LastPass).

But on mobile, re-entering that information is a pain, especially if contact forms aren’t optimized well (more on that below). So, to ease the log-in and checkout process for mobile users, consider ways in which you can simplify the process:

  • Allow for guest checkout.
  • Allow for one-click expedited checkout.
  • Enable one-click sign-in from a trusted source, like Facebook.
  • Enable payment on a trusted payment provider’s website, like PayPal, Google Wallet or Stripe.

One of the nice things about Sephora's already convenient checkout process is that customers can automate the sign-in process going forward with a simple toggle:

Sephora enables return customers to stay signed in, to avoid this during checkout again. (Image: Sephora)

When mobile customers are feeling the rush and want to get to the next stage of checkout, Sephora’s auto-sign-in feature would definitely come in handy and encourage customers to buy more frequently from the mobile website.

Many mobile websites wait until the bottom of the login page to tell customers what kinds of options they have for checking out. But rather than surprise them late, Victoria’s Secret displays this information in big bold buttons right at the very top:

Victoria’s Secret simplifies and speeds up checkout by giving three attractive options. (Image: Victoria’s Secret)

Customers have a choice of signing in with their account, checking out as a guest or going directly to PayPal. They are not surprised to discover later on that their preferred checkout or payment method isn’t offered.

I also really love how Victoria’s Secret has chosen to do this. There’s something nice about the brightly colored “Sign In” button sitting beside the more muted “Check Out as a Guest” button. For one, it adds a hint of Victoria’s Secret brand colors to the checkout, which is always a nice touch. But the way it’s colored the buttons also makes clear what it wants the primary action to be (i.e. to create an account and sign in).

6. Add Breadcrumbs

When you send mobile customers to checkout, the last thing you want is to give them unnecessary distractions. That’s why the website’s standard navigation bar (or hamburger menu) is typically removed from this page.

Nonetheless, the checkout process can be intimidating if customers don’t know what’s ahead. How many forms will they need to fill out? What sort of information is needed? Will they have a chance to review their order before submitting payment details?

If you’ve designed a multi-page checkout, allay your customers’ fears by defining each step with clearly labeled breadcrumb navigation at the top of the page. In addition, this will give your checkout a cleaner design, reducing the number of clicks and scrolling per page.

Hayneedle has a beautiful example of breadcrumb navigation in action:

Hayneedle’s breadcrumbs are cleanly designed and easy to find. (Image: Hayneedle)

You can see that three steps are broken out and clearly labeled. There’s absolutely no question here about what users will encounter in those steps either, which will help put their minds at ease. Three steps seems reasonable enough, and users will have a chance to review the order once more before completing the purchase.

Sephora has an alternative style of “breadcrumbs” in its checkout:

Sephora’s numbered breadcrumbs appear as you complete each section. (Image: Sephora)

Instead of placing each “breadcrumb” at the top of the checkout page, Sephora’s customers can see what the next step is, as well as how many more are to come as they work their way through the form.

This is a good option to take if you’d rather not make the top navigation or the breadcrumbs sticky. Instead, you can prioritize the call to action (CTA), which you might find better motivates the customer to move down the page and complete their purchase.

I think both of these breadcrumb designs are valid, though. So, it might be worth A/B testing them if you’re unsure of which would lead to more conversions for your visitors.

7. Format the Checkout Form Wisely

Good mobile checkout form design follows a pretty strict formula, which isn’t surprising. While there are ways to bend the rules on desktop in terms of structuring the form, the number of steps per page, the inclusion of images and so on, you really don’t have that kind of flexibility on mobile.

Instead, you will need to be meticulous when building the form:

  • Design each field of the checkout form so that it stretches the full width of the website.
  • Limit the fields to only what’s essential.
  • Clearly label each field outside of and above it.
  • Use a font size of at least 16 pixels.
  • Format each field so that it’s large enough to tap into without zooming.
  • Use a recognizable mark to indicate when something is required (like an asterisk).
  • Let users know about an error immediately after the information has been entered in a field.
  • Place the call to action at the very bottom of the form.

Because the checkout form is the most important element that moves customers through the checkout process, you can’t afford to mess around with a tried and true formula. If users can’t seamlessly get from top to bottom, if the fields are too difficult to engage with, or if the functionality of the form itself is riddled with errors, then you might as well kiss your mobile purchases (and maybe your purchases in general) goodbye.
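
To make this checklist concrete, here is a minimal, hypothetical sketch of a single checkout field written as a React component (any templating approach works the same way): a full-width input, a label placed above the field, a font size of at least 16 pixels and an asterisk for required fields. The component name and inline styles are illustrative only, not taken from any of the sites discussed here.

import React from "react";

// One full-width, clearly labeled checkout field.
// A font size of 16px or more also stops mobile browsers from zooming in on focus.
export function CheckoutField({ id, label, required = false, ...inputProps }) {
  return (
    <div style={{ marginBottom: 16 }}>
      <label htmlFor={id} style={{ display: "block", fontSize: 16, marginBottom: 4 }}>
        {label}{required && " *"}
      </label>
      <input
        id={id}
        required={required}
        style={{ width: "100%", fontSize: 16, padding: 12, boxSizing: "border-box" }}
        {...inputProps}
      />
    </div>
  );
}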

Crutchfield shows how to create form fields that are very user-friendly on mobile:

Form fields on the Crutchfield checkout page are large and difficult to miss. (Image: Crutchfield)

As you can see, each field is large enough to tap on (even with fat fingers). The bold outline around the currently selected field is also a nice touch. For a customer who is multitasking or distracted by something around them, returning to the checkout form would be much easier with this type of format.

Sephora, again, handles mobile checkout the right way. In this case, I want to draw your attention to the grayed-out “Place Order” button:

Sephora uses the call to action as a guide for customers who haven’t finished the form. (Image: Sephora)

The button serves as an indicator to customers that they’re not quite ready to submit their purchase information yet, which is great. Even though the form is beautifully designed — everything is well labeled, the fields are large, and the form is logically organized — mobile users could accidentally scroll too far past a field and wouldn’t know it until clicking the call-to-action button.

If you can keep users from receiving that dreaded “missing information” error, you’ll do a better job of holding onto their purchases.

8. Simplify Form Input

Digging a bit deeper into these contact forms, let’s look at how you can simplify the input of data on mobile:

  • Allow customers to use their browser’s autocomplete functionality to fill in forms.
  • Use the tabindex HTML attribute so that customers can tap the keyboard’s up and down arrows to move through the form. This keeps their thumbs within a comfortable range on the smartphone at all times, instead of constantly reaching up to tap into a new field.
  • Add a checkbox that automatically copies the billing address information over to the shipping fields.
  • Change the keyboard according to what kind of field is being typed in.

One example of this is Bass Pro Shops’ mobile website:

Each field in the Bass Pro checkout form provides users with the right keyboard type. (Image: Bass Pro Shops)

For starters, the keyboard offers tab functionality (see the up and down arrows just above the keyboard). For customers with short fingers, or impatient ones who just want to type away on the keyboard, the arrows help keep their hands in one place, thus speeding up checkout.

Also, when customers tab into a numbers-only field (like for their phone number), the keyboard automatically changes, so they don’t have to switch manually. Again, this is another way to up the convenience of making a purchase on mobile.
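
Under the hood, this keyboard switching is usually driven by the type and inputmode attributes on each field, with autocomplete hints telling the browser which stored value to offer. A minimal, hypothetical JSX sketch of a phone field (the field name and styles are illustrative):

import React from "react";

// A phone field that brings up the telephone keypad on mobile:
//  - type="tel" tells mobile browsers to show the telephone keypad,
//  - inputMode="tel" is an extra hint for keyboards that honor inputmode,
//  - autoComplete="tel" lets autofill suggest the customer's saved number.
export function PhoneField() {
  return (
    <input
      id="phone"
      name="phone"
      type="tel"
      inputMode="tel"
      autoComplete="tel"
      style={{ width: "100%", fontSize: 16, padding: 12, boxSizing: "border-box" }}
    />
  );
}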

Amazon’s mobile checkout includes a quick checkbox that streamlines customers’ submission of billing information:

Amazon gives customers an easy way to duplicate their shipping address to billing. (Image: Amazon)

As we’ve seen with mobile checkout form design, simpler is always better. Obviously, you will always need to collect certain details from customers each time (unless their account has saved that information). Nonetheless, if you can provide a quick toggle or checkbox that enables them to copy data over from one form to another, then do it.
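
A minimal, hypothetical React sketch of such a toggle, assuming the shipping values live in the parent form’s state and are copied over via a callback (the prop names are illustrative only):

import React, { useState } from "react";

// A "billing address is the same as shipping" toggle.
// When checked, the billing fields are pre-filled from the shipping values.
export function BillingSameAsShipping({ shipping, onChangeBilling }) {
  const [sameAsShipping, setSameAsShipping] = useState(true);

  function handleToggle(event) {
    const checked = event.target.checked;
    setSameAsShipping(checked);
    if (checked) {
      // copy the shipping values over so the customer doesn't retype them
      onChangeBilling({ ...shipping });
    }
  }

  return (
    <label style={{ display: "block", fontSize: 16, padding: 12 }}>
      <input type="checkbox" checked={sameAsShipping} onChange={handleToggle} />
      {" "}Billing address is the same as my shipping address
    </label>
  );
}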

9. Don’t Skimp on the CTA

When designing a desktop checkout, your main concerns with the CTA are things like strategic placement of the button and choosing an eye-catching color to draw attention to it.

On mobile, however, you have to think about size, too — and not just how much space it takes up on the screen. Remember the thumb zone and the various ways in which users hold their phone. Ensure that the button is wide enough so that any user can easily click on it without having to change their hand position.

So, your goal should be to design buttons that (1) sit at the bottom of the mobile checkout page and (2) stretch all the way from left to right, as is the case on Staples’ mobile website:

Staples’ bright blue CTA sticks out in an otherwise plain checkout. (Image: Staples)

No matter who is making the purchase — a left-handed, right-handed or two-handed cradler — that button will be easy to reach.

Of all the mobile checkout enhancements we’ve covered today, the CTA is the easiest one to address. Make it big, give it a distinctive color, place it at the very bottom of the mobile screen, and make it span the full width. In other words, don’t make customers work hard to take the final step in a purchase.

10. Offer an Alternate Way Out

Finally, give customers an alternate way out.

Let’s say they’re shopping on a mobile website, adding items to their cart, but something isn’t sitting right with them, and they don’t want to make the purchase. You’ve done everything you can to assure them along the way with a clean, easy and secure checkout experience, but they just aren’t confident in making a payment on their phone.

Rather than merely hoping you don’t lose the purchase entirely, give them a chance to save it for later. That way, if they really are interested in buying your product, they can revisit on desktop and pull the trigger. It’s not ideal, because you do want to keep them in place on mobile, but the option is good for customers who just can’t be saved.

As you can see on L.L. Bean’s mobile website, there is an option at checkout to “Move to Wish List”:

L.L. Bean gives customers another chance to move items to their wish list during checkout. (Image: L.L. Bean) (View large version)

What’s nice about this is that L.L. Bean clearly doesn’t want browsing of the wish list or the removal of an item to be a primary action. If “Move to Wish List” were shown as a big bold CTA button, more customers might decide to take this seemingly safer alternative. As it’s designed now, it’s more of a, “Hey, we don’t want you to do anything you’re not comfortable with. This is here just in case.”

While fewer options are generally better in web design, this might be something to explore if your checkout has a high cart abandonment rate on mobile.

Wrapping Up

As more mobile visitors flock to your website, every step leading to conversion — including the checkout phase — needs to be optimized for convenience, speed and security. If your checkout is not adeptly designed for mobile users’ specific needs and expectations, you’re going to find that conversions drop or shift back to desktop — and that’s not the direction you want things to go, especially when Google is pushing us all towards a mobile-first world.

(da, ra, yk, al, il)
Categories: Web Design

How To Create An Audio/Video Recording App With React Native: An In-Depth Tutorial

Smashing Magazine - Thu, 04/19/2018 - 03:15
Oleh Mryhlod 2018-04-19T12:15:26+02:00

React Native is a young technology, already gaining popularity among developers. It is a great option for smooth, fast, and efficient mobile app development. High-performance rates for mobile environments, code reuse, and a strong community: These are just some of the benefits React Native provides.

In this guide, I will share some insights about the high-level capabilities of React Native and the products you can develop with it in a short period of time.

We will delve into the step-by-step process of creating a video/audio recording app with React Native and Expo. Expo is an open-source toolchain built around React Native for developing iOS and Android projects with React and JavaScript. It provides a bunch of native APIs maintained by native developers and the open-source community.

After reading this article, you should have all the necessary knowledge to create video/audio recording functionality with React Native.

Let's get right to it.

Brief Description Of The Application

The application you will learn to develop is called a multimedia notebook. I have implemented part of this functionality in an online job board application for the film industry. The main goal of this mobile app is to connect people who work in the film industry with employers. They can create a profile, add a video or audio introduction, and apply for jobs.

The application consists of three main screens that you can switch between with the help of a tab navigator:

  • the audio recording screen,
  • the video recording screen,
  • a screen with a list of all recorded media and functionality to play back or delete them.

Check out how this app works by opening this link with Expo.

First, download Expo to your mobile phone. There are two options to open the project:

  1. Open the link in the browser, scan the QR code with your mobile phone, and wait for the project to load.
  2. Open the link with your mobile phone and click on “Open project using Expo”.

You can also open the app in the browser. Click on “Open project in the browser”. If you have a paid account on Appetize.io, visit it and enter the code in the field to open the project. If you don’t have an account, click on “Open project” and wait in an account-level queue to open the project.

However, I recommend that you download the Expo app and open this project on your mobile phone to check out all of the features of the video and audio recording app.

You can find the full code for the media recording app in the repository on GitHub.

Dependencies Used For App Development

As mentioned, the media recording app is developed with React Native and Expo.

You can see the full list of dependencies in the repository’s package.json file.

These are the main libraries used:

  • React-navigation, for navigating the application,
  • Redux, for saving the application’s state,
  • React-redux, which are React bindings for Redux,
  • Recompose, for writing the components’ logic,
  • Reselect, for extracting the state fragments from Redux.

Let's look at the project's structure:

(Screenshot: the project structure)
  • src/index.js: root app component imported in the app.js file;
  • src/components: reusable components;
  • src/constants: global constants;
  • src/styles: global styles, colors, font sizes and dimensions;
  • src/utils: useful utilities and recompose enhancers;
  • src/screens: screens components;
  • src/store: Redux store;
  • src/navigation: application’s navigator;
  • src/modules: Redux modules divided by entities as modules/audio, modules/video, modules/navigation.

Let’s proceed to the practical part.

Create Audio Recording Functionality With React Native

First, it's important to check the documentation for the Expo Audio API related to audio recording and playback. You can see all of the code in the repository. I recommend opening the code as you read this article to better understand the process.

When launching the application for the first time, you’ll need the user's permission for audio recording, which entails access to the microphone. Let's use Expo.AppLoading and ask permission for recording by using Expo.Permissions (see the src/index.js) during startAsync.

await Permissions.askAsync(Permissions.AUDIO_RECORDING);

Audio recordings are displayed on a separate screen whose UI changes depending on the state.

First, you can see the button “Start recording”. After it is clicked, the audio recording begins, and you will find the current audio duration on the screen. After stopping the recording, you will have to type the recording’s name and save the audio to the Redux store.

My audio recording UI looks like this:

(Screenshot: the audio recording UI)

I can save the audio in the Redux store in the following format:

audioItemsIds: ['id1', 'id2'],
audioItems: {
  'id1': {
    id: string,
    title: string,
    recordDate: date string,
    duration: number,
    audioUrl: string,
  }
},

Let’s write the audio logic by using Recompose in the screen’s container src/screens/RecordAudioScreenContainer.

Before you start recording, customize the audio mode with the help of Expo.Audio.setAudioModeAsync(mode), where mode is the dictionary with the following key-value pairs:

  • playsInSilentModeIOS: A boolean selecting whether your experience’s audio should play in silent mode on iOS. This value defaults to false.
  • allowsRecordingIOS: A boolean selecting whether recording is enabled on iOS. This value defaults to false. Note: When this flag is set to true, playback may be routed to the phone receiver, instead of to the speaker.
  • interruptionModeIOS: An enum selecting how your experience’s audio should interact with the audio from other apps on iOS.
  • shouldDuckAndroid: A boolean selecting whether your experience’s audio should automatically be lowered in volume (“duck”) if audio from another app interrupts your experience. This value defaults to true. If false, audio from other apps will pause your audio.
  • interruptionModeAndroid: An enum selecting how your experience’s audio should interact with the audio from other apps on Android.

Note: You can learn more about the customization of AudioMode in the documentation.

I have used the following values in this app:

  • interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_DO_NOT_MIX — our recording interrupts audio from other apps on iOS.
  • playsInSilentModeIOS: true
  • shouldDuckAndroid: true
  • interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DO_NOT_MIX — our recording interrupts audio from other apps on Android.
  • allowsRecordingIOS: changes to true before the audio recording starts and back to false after it completes.

To implement this, let's write the handler setAudioMode with Recompose.

withHandlers({
  setAudioMode: () => async ({ allowsRecordingIOS }) => {
    try {
      await Audio.setAudioModeAsync({
        allowsRecordingIOS,
        interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_DO_NOT_MIX,
        playsInSilentModeIOS: true,
        shouldDuckAndroid: true,
        interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DO_NOT_MIX,
      });
    } catch (error) {
      console.log(error); // eslint-disable-line
    }
  },
}),

To record the audio, you’ll need to create an instance of the Expo.Audio.Recording class.

const recording = new Audio.Recording();

After creating the recording instance, you will be able to receive the status of the Recording with the help of recordingInstance.getStatusAsync().

The status of the recording is a dictionary with the following key-value pairs:

  • canRecord: a boolean.
  • isRecording: a boolean describing whether the recording is currently recording.
  • isDoneRecording: a boolean.
  • durationMillis: current duration of the recorded audio.
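
If you only need the status once rather than at regular intervals, you can poll it directly. A minimal sketch, assuming recording is an Expo.Audio.Recording instance that has already been prepared:

// One-off status check instead of a recurring callback.
const logRecordingStatus = async (recording) => {
  try {
    const status = await recording.getStatusAsync();
    console.log(status.isRecording, status.durationMillis);
  } catch (error) {
    console.log(error); // eslint-disable-line
  }
};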

You can also set a function to be called at regular intervals with recordingInstance.setOnRecordingStatusUpdate(onRecordingStatusUpdate).

To update the UI, you will need to call setOnRecordingStatusUpdate and set your own callback.

Let’s add some props and a recording callback to the container.

withStateHandlers({
  recording: null,
  isRecording: false,
  durationMillis: 0,
  isDoneRecording: false,
  fileUrl: null,
  audioName: '',
}, {
  setState: () => obj => obj,
  setAudioName: () => audioName => ({ audioName }),
  recordingCallback: () => ({ durationMillis, isRecording, isDoneRecording }) => ({
    durationMillis,
    isRecording,
    isDoneRecording,
  }),
}),

The callback setting for setOnRecordingStatusUpdate is:

recording.setOnRecordingStatusUpdate(props.recordingCallback);

onRecordingStatusUpdate is called every 500 milliseconds by default. To keep the UI updates smooth, set a 200-millisecond interval with the help of setProgressUpdateInterval:

recording.setProgressUpdateInterval(200);

After creating an instance of this class, call prepareToRecordAsync to record the audio.

recordingInstance.prepareToRecordAsync(options) loads the recorder into memory and prepares it for recording. It must be called before calling startAsync(), and it can only be used if the recording instance has never been prepared before.

The options passed to this method configure the recording: sample rate, bit rate, number of channels, format, encoder and file extension. You can find a list of all recording options in this document.

In this case, let’s use Audio.RECORDING_OPTIONS_PRESET_HIGH_QUALITY.

After the recording has been prepared, you can start recording by calling the method recordingInstance.startAsync().

Before creating a new recording instance, check whether it has been created before. The handler for beginning the recording looks like this:

onStartRecording: props => async () => {
  try {
    if (props.recording) {
      props.recording.setOnRecordingStatusUpdate(null);
      props.setState({ recording: null });
    }

    await props.setAudioMode({ allowsRecordingIOS: true });

    const recording = new Audio.Recording();
    recording.setOnRecordingStatusUpdate(props.recordingCallback);
    recording.setProgressUpdateInterval(200);

    props.setState({ fileUrl: null });
    await recording.prepareToRecordAsync(Audio.RECORDING_OPTIONS_PRESET_HIGH_QUALITY);
    await recording.startAsync();
    props.setState({ recording });
  } catch (error) {
    console.log(error); // eslint-disable-line
  }
},

Now you need to write a handler for the audio recording completion. After clicking the stop button, you have to stop the recording, disable it on iOS, receive and save the local URL of the recording, and set OnRecordingStatusUpdate and the recording instance to null:

onEndRecording: props => async () => {
  try {
    await props.recording.stopAndUnloadAsync();
    await props.setAudioMode({ allowsRecordingIOS: false });
  } catch (error) {
    console.log(error); // eslint-disable-line
  }

  if (props.recording) {
    const fileUrl = props.recording.getURI();
    props.recording.setOnRecordingStatusUpdate(null);
    props.setState({ recording: null, fileUrl });
  }
},

After this, type the audio name, click the “continue” button, and the audio note will be saved in the Redux store.

onSubmit: props => () => {
  if (props.audioName && props.fileUrl) {
    const audioItem = {
      id: uuid(),
      recordDate: moment().format(),
      title: props.audioName,
      audioUrl: props.fileUrl,
      duration: props.durationMillis,
    };

    props.addAudio(audioItem);
    props.setState({
      audioName: '',
      isDoneRecording: false,
    });

    props.navigation.navigate(screens.LibraryTab);
  }
},

Audio Playback With React Native

You can play the audio on the screen with the saved audio notes. To start the audio playback, click one of the items on the list. Below, you can see the audio player that allows you to track the current position of playback, to set the playback starting point and to toggle the playing audio.

Here’s what my audio playback UI looks like:

(Screenshot: the audio playback UI)

The Expo.Audio.Sound objects and Expo.Video components share a unified imperative API for media playback.

Let's write the logic of the audio playback by using Recompose in the screen container src/screens/LibraryScreen/LibraryScreenContainer, as the audio player is available only on this screen.

If you want to display the player at any point of the application, I recommend writing the logic of the player and audio playback in Redux operations using redux-thunk.

Let's customize the audio mode in the same way we did for the audio recording. First, set allowsRecordingIOS to false.

lifecycle({
  async componentDidMount() {
    await Audio.setAudioModeAsync({
      allowsRecordingIOS: false,
      interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_DO_NOT_MIX,
      playsInSilentModeIOS: true,
      shouldDuckAndroid: true,
      interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DO_NOT_MIX,
    });
  },
}),

We have created the recording instance for audio recording. As for audio playback, we need to create the sound instance. We can do it in two different ways:

  1. const playbackObject = new Expo.Audio.Sound();
  2. Expo.Audio.Sound.create(source, initialStatus = {}, onPlaybackStatusUpdate = null, downloadFirst = true)

If you use the first method, you will need to call playbackObject.loadAsync(), which loads the media from source into memory and prepares it for playing, after creation of the instance.

The second method is a static convenience method to construct and load a sound. It creates and loads a sound from source with the optional initialStatus, onPlaybackStatusUpdate and downloadFirst parameters.

The source parameter is the source of the sound. It supports the following forms:

  • a dictionary of the form { uri: 'http://path/to/file' } with a network URL pointing to an audio file on the web;
  • require('path/to/file') for an audio file asset in the source code directory;
  • an Expo.Asset object for an audio file asset.
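
For comparison, here is a minimal sketch of the first approach, creating the instance manually and then loading it, assuming Audio is imported from Expo and the network URL is just a placeholder:

// Create the sound object first, then load the source and play it.
const playRemoteAudio = async () => {
  const playbackObject = new Audio.Sound();
  try {
    await playbackObject.loadAsync({ uri: 'http://path/to/file' });
    await playbackObject.playAsync();
  } catch (error) {
    console.log(error); // eslint-disable-line
  }
};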

The initialStatus parameter is the initial playback status. PlaybackStatus is the structure returned from all playback API calls describing the state of the playbackObject at that point of time. It is a dictionary with the key-value pairs. You can check all of the keys of the PlaybackStatus in the documentation.

onPlaybackStatusUpdate is a function taking a single parameter, PlaybackStatus. It is called at regular intervals while the media is in the loaded state. The interval is 500 milliseconds by default. In my application, I set it to a 50-millisecond interval for proper UI updates.

Before creating the sound instance, you will need to implement the onPlaybackStatusUpdate callback. First, add some props to the screen container:

withClassVariableHandlers({
  playbackInstance: null,
  isSeeking: false,
  shouldPlayAtEndOfSeek: false,
  playingAudio: null,
}, 'setClassVariable'),
withStateHandlers({
  position: null,
  duration: null,
  shouldPlay: false,
  isLoading: true,
  isPlaying: false,
  isBuffering: false,
  showPlayer: false,
}, {
  setState: () => obj => obj,
}),

Now, implement onPlaybackStatusUpdate. You will need to make several validations based on PlaybackStatus for a proper UI display:

withHandlers({
  soundCallback: props => (status) => {
    if (status.didJustFinish) {
      props.playbackInstance().stopAsync();
    } else if (status.isLoaded) {
      const position = props.isSeeking()
        ? props.position
        : status.positionMillis;
      const isPlaying = (props.isSeeking() || status.isBuffering)
        ? props.isPlaying
        : status.isPlaying;
      props.setState({
        position,
        duration: status.durationMillis,
        shouldPlay: status.shouldPlay,
        isPlaying,
        isBuffering: status.isBuffering,
      });
    }
  },
}),

After this, you have to implement a handler for the audio playback. If a sound instance is already created, you need to unload the media from memory by calling playbackInstance.unloadAsync() and clear OnPlaybackStatusUpdate:

loadPlaybackInstance: props => async (shouldPlay) => {
  props.setState({ isLoading: true });

  if (props.playbackInstance() !== null) {
    await props.playbackInstance().unloadAsync();
    props.playbackInstance().setOnPlaybackStatusUpdate(null);
    props.setClassVariable({ playbackInstance: null });
  }

  const { sound } = await Audio.Sound.create(
    { uri: props.playingAudio().audioUrl },
    { shouldPlay, position: 0, duration: 1, progressUpdateIntervalMillis: 50 },
    props.soundCallback,
  );

  props.setClassVariable({ playbackInstance: sound });
  props.setState({ isLoading: false });
},

Call the handler loadPlaybackInstance(true) by clicking the item in the list. It will automatically load and play the audio.

Let's add the pause and play functionality (toggle playing) to the audio player. If audio is already playing, you can pause it with the help of playbackInstance.pauseAsync(). If audio is paused, you can resume playback from the paused point with the help of the playbackInstance.playAsync() method:

onTogglePlaying: props => () => {
  if (props.playbackInstance() !== null) {
    if (props.isPlaying) {
      props.playbackInstance().pauseAsync();
    } else {
      props.playbackInstance().playAsync();
    }
  }
},

When you click on the playing item, it should stop. If you want to stop audio playback and put it into the 0 playing position, you can use the method playbackInstance.stopAsync():

onStop: props => () => {
  if (props.playbackInstance() !== null) {
    props.playbackInstance().stopAsync();
    props.setShowPlayer(false);
    props.setClassVariable({ playingAudio: null });
  }
},

The audio player also allows you to rewind the audio with the help of the slider. When you start sliding, the audio playback should be paused with playbackInstance.pauseAsync().

After the sliding is complete, you can set the audio playing position with the help of playbackInstance.setPositionAsync(value), or play back the audio from the set position with playbackInstance.playFromPositionAsync(value):

onCompleteSliding: props => async (value) => {
  if (props.playbackInstance() !== null) {
    if (props.shouldPlayAtEndOfSeek) {
      await props.playbackInstance().playFromPositionAsync(value);
    } else {
      await props.playbackInstance().setPositionAsync(value);
    }
    props.setClassVariable({ isSeeking: false });
  }
},

After this, you can pass the props to the components MediaList and AudioPlayer (see the file src/screens/LibraryScreen/LibraryScreenView).

Video Recording Functionality With React Native

Let's proceed to video recording.

We’ll use Expo.Camera for this purpose. Expo.Camera is a React component that renders a preview of the device’s front or back camera. Expo.Camera can also take photos and record videos that are saved to the app’s cache.

To record video, you need permission for access to the camera and microphone. Let's add the request for camera access as we did with the audio recording (in the file src/index.js):

await Permissions.askAsync(Permissions.CAMERA);

Video recording is available on the “Video Recording” screen. After switching to this screen, the camera will turn on.

You can change the camera type (front or back) and start video recording. During recording, you can see its general duration and can cancel or stop it. When recording is finished, you will have to type the name of the video, after which it will be saved in the Redux store.

Here is what my video recording UI looks like:

(Screenshot: the video recording UI)

Let’s write the video recording logic by using Recompose on the container screen src/screens/RecordVideoScreen/RecordVideoScreenContainer.

You can see the full list of props for the Expo.Camera component in the documentation.

In this application, we will use the following props for Expo.Camera.

  • type: This sets the camera type (front or back).
  • onCameraReady: This callback is invoked when the camera preview is set. You won't be able to start recording if the camera is not ready.
  • style: This sets the styles for the camera container. In this case, the size is 4:3.
  • ref: This is used for direct access to the camera component.

Let's add the variable for saving the type and handler for its changing.

cameraType: Camera.Constants.Type.back,
toggleCameraType: state => () => ({
  cameraType: state.cameraType === Camera.Constants.Type.front
    ? Camera.Constants.Type.back
    : Camera.Constants.Type.front,
}),

Let's add the variable for saving the camera ready state and callback for onCameraReady.

isCameraReady: false,
setCameraReady: () => () => ({ isCameraReady: true }),

Let's add the variable for saving the camera component reference and setter.

cameraRef: null,
setCameraRef: () => cameraRef => ({ cameraRef }),

Let's pass these variables and handlers to the camera component.

<Camera
  type={cameraType}
  onCameraReady={setCameraReady}
  style={s.camera}
  ref={setCameraRef}
/>

Now, when calling toggleCameraType after clicking the button, the camera will switch from the front to the back.

Currently, we have access to the camera component via the reference, and we can start video recording with the help of cameraRef.recordAsync().

The method recordAsync starts recording a video to be saved to the cache directory.

Arguments:

Options (object) — a map of options:

  • quality (VideoQuality): Specifies the quality of the recorded video. Usage: Camera.Constants.VideoQuality['<value>']; possible values for 16:9 resolution are 2160p, 1080p, 720p and 480p (Android only), and for 4:3 the size is 640x480. If the chosen quality is not available for the device, the highest available one is chosen.
  • maxDuration (number): Maximum video duration in seconds.
  • maxFileSize (number): Maximum video file size in bytes.
  • mute (boolean): If present, video will be recorded with no sound.

recordAsync returns a promise that resolves to an object containing the video file’s uri property. You will need to save this URI in order to play back the video later. The promise resolves when stopRecording is invoked, when maxDuration or maxFileSize is reached, or when the camera preview is stopped.

Because the ratio set for the camera component sides is 4:3, let's set the same format for the video quality.

Here is what the handler for starting video recording looks like (see the full code of the container in the repository):

onStartRecording: props => async () => {
  if (props.isCameraReady) {
    props.setState({ isRecording: true, fileUrl: null });
    props.setVideoDuration();
    props.cameraRef.recordAsync({ quality: '4:3' })
      .then((file) => {
        props.setState({ fileUrl: file.uri });
      });
  }
},

During the video recording, we can’t receive the recording status as we have done for audio. That's why I have created a function to set video duration.
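
The actual helper lives in the repository’s container code; a minimal sketch of such a duration counter, assuming the elapsed seconds are kept in state and the interval id is stored so that stopRecording can clear it (the names follow the handlers above but are still illustrative):

setVideoDuration: props => () => {
  const startTime = Date.now();
  // update the elapsed duration once per second while recording
  const interval = setInterval(() => {
    props.setState({
      videoDuration: Math.floor((Date.now() - startTime) / 1000),
    });
  }, 1000);
  props.setState({ interval });
},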

To stop the video recording, we have to call the following function:

stopRecording: props => () => {
  if (props.isRecording) {
    props.cameraRef.stopRecording();
    props.setState({ isRecording: false });
    clearInterval(props.interval);
  }
},

Check out the entire process of video recording.

Video Playback Functionality With React Native

You can play back the video on the “Library” screen. Video notes are located in the “Video” tab.

To start the video playback, click the selected item in the list. Then, switch to the playback screen, where you can watch or delete the video.

The UI for video playback looks like this:

(Screenshot: the video playback UI)

To play back the video, use Expo.Video, a component that displays a video inline with the other React Native UI elements in your app.

The video will be displayed on the separate screen, PlayVideo.

You can check out all of the props for Expo.Video here.

In our application, the Expo.Video component uses native playback controls and looks like this:

<Video
  source={{ uri: videoUrl }}
  style={s.video}
  shouldPlay={isPlaying}
  resizeMode="contain"
  useNativeControls={isPlaying}
  onLoad={onLoad}
  onError={onError}
/>

  • source
    This is the source of the video data to display. The same forms as for Expo.Audio.Sound are supported.
  • resizeMode
    This is a string describing how the video should be scaled for display in the component view’s bounds. It can be “stretch”, “contain” or “cover”.
  • shouldPlay
    This boolean describes whether the media is supposed to play.
  • useNativeControls
    This boolean, if set to true, displays native playback controls (such as play and pause) within the video component.
  • onLoad
    This function is called once the video has been loaded.
  • onError
    This function is called if loading or playback has encountered a fatal error. The function passes a single error message string as a parameter.

When the video is loaded, the play button should be rendered on top of it.

When you click the play button, the video turns on and the native playback controls are displayed.

Let’s write the logic of the video using Recompose in the screen container src/screens/PlayVideoScreen/PlayVideoScreenContainer:

const defaultState = {
  isError: false,
  isLoading: false,
  isPlaying: false,
};

const enhance = compose(
  paramsToProps('videoUrl'),
  withStateHandlers({
    ...defaultState,
    isLoading: true,
  }, {
    onError: () => () => ({ ...defaultState, isError: true }),
    onLoad: () => () => defaultState,
    onTogglePlaying: ({ isPlaying }) => () => ({ ...defaultState, isPlaying: !isPlaying }),
  }),
);

As previously mentioned, the Expo.Audio.Sound objects and Expo.Video components share a unified imperative API for media playback. That's why you can create custom controls and use more advanced functionality with the Playback API.

Check out the video playback process:

See the full code for the application in the repository.

You can also install the app on your phone by using Expo and check out how it works in practice.

Wrapping Up

I hope you have enjoyed this article and have enriched your knowledge of React Native. You can use this audio and video recording tutorial to create your own custom-designed media player. You can also scale the functionality and add the ability to save media in the phone’s memory or on a server, synchronize media data between different devices, and share media with others.

As you can see, there is a wide scope for imagination. If you have any questions about the process of developing an audio or video recording app with React Native, feel free to drop a comment below.

(da, lf, ra, yk, al, il)
Categories: Web Design

Testing in Laravel

Tuts+ Code - Web Development - Wed, 04/18/2018 - 05:00

Irrespective of the application you're dealing with, testing is an important and often overlooked aspect that you should give the attention it deserves. Today, we're going to discuss it in the context of the Laravel web framework.

In fact, Laravel already supports the PHPUnit testing framework in the core itself. PHPUnit is one of the most popular and widely accepted testing frameworks across the PHP community. It allows you to create both kinds of tests—unit and functional.

We'll start with a basic introduction to unit and functional testing. As we move on, we'll explore how to create unit and functional tests in Laravel. I assume that you're familiar with the basics of the PHPUnit framework, as we will explore it in the context of Laravel in this article.

Unit and Functional Tests

If you're already familiar with the PHPUnit framework, you should know that you can divide tests into two flavors—unit tests and functional tests.

In unit tests, you test the correctness of a given function or a method. More importantly, you test a single piece of your code's logic at a given time.

In your development, if you find that the method you've implemented contains more than one logical unit, you're better off splitting that into multiple methods so that each method holds a single logical and testable piece of code.

Let's have a quick look at an example that's an ideal case for unit testing.

public function getNameAttribute($value)
{
    return ucfirst($value);
}

As you can see, the method does one and only one thing: it uses the ucfirst function to convert the title so that it starts with an uppercase letter.

Whereas the unit test is used to test the correctness of a single logical unit of code, the functional test, on the other hand, allows you to test the correctness of a specific use case. More specifically, it allows you to simulate actions a user performs in an application in order to run a specific use case.

For example, you could implement a functional test case for some login functionality that may involve the following steps.

  • Create the GET request to access the login page.
  • Check if we are on the login page.
  • Generate the POST request to post data to the login page.
  • Check if the session was created successfully.

So that's how you're supposed to create the functional test case. From the next section onward, we'll create examples that demonstrate how to create unit and functional test cases in Laravel.

Setting Up the Prerequisites

Before we go ahead and create actual tests, we need to set up a couple of things that'll be used in our tests.

We will create the Post model and related migration to start with. Go ahead and run the following artisan command to create the Post model.

$ php artisan make:model Post --migration

The above command should create the Post model class and an associated database migration as well.

The Post model class should look like:

<?php
// app/Post.php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Post extends Model
{
    //
}

And the database migration file should be created at database/migrations/YYYY_MM_DD_HHMMSS_create_posts_table.php.

We also want to store the title of the post. Let's revise the code of the Post database migration file to look like the following.

<?php

use Illuminate\Support\Facades\Schema;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;

class CreatePostsTable extends Migration
{
    /**
     * Run the migrations.
     *
     * @return void
     */
    public function up()
    {
        Schema::create('posts', function (Blueprint $table) {
            $table->increments('id');
            $table->string('name');
            $table->timestamps();
        });
    }

    /**
     * Reverse the migrations.
     *
     * @return void
     */
    public function down()
    {
        Schema::dropIfExists('posts');
    }
}

As you can see, we've added the $table->string('name') column to store the title of the post. Next, you just need to run the migrate command to actually create that table in the database.

$ php artisan migrate

Also, let's replace the Post model with the following contents.

<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Post extends Model
{
    /**
     * Get the post title.
     *
     * @param string $value
     * @return string
     */
    public function getNameAttribute($value)
    {
        return ucfirst($value);
    }
}

We've just added the accessor method, which modifies the title of the post, and that's exactly what we'll test in our unit test case. That's it as far as the Post model is concerned.

Next, we'll create a controller file at app/Http/Controllers/AccessorController.php. It'll be useful to us when we create the functional test case at a later stage.

<?php
// app/Http/Controllers/AccessorController.php

namespace App\Http\Controllers;

use App\Post;
use Illuminate\Http\Request;
use App\Http\Controllers\Controller;

class AccessorController extends Controller
{
    public function index(Request $request)
    {
        // get the post-id from request params
        $post_id = $request->get("id", 0);

        // load the requested post
        $post = Post::find($post_id);

        // check the name property
        return $post->name;
    }
}

In the index method, we retrieve the post id from the request parameters and try to load the post model object.

Let's add an associated route as well in the routes/web.php file.

Route::get('accessor/index', 'AccessorController@index');

And with that in place, you can visit the http://your-laravel-site.com/accessor/index?id=1 URL to see if it works as expected.

Unit Testing

In the previous section, we did the initial setup that's going to be useful to us in this and upcoming sections. In this section, we are going to create an example that demonstrates the concepts of unit testing in Laravel.

As always, Laravel provides an artisan command that allows you to create the base template class of the unit test case.

Run the following command to create the AccessorTest unit test case class. It's important to note that we're passing the --unit keyword that creates the unit test case, and it'll be placed under the tests/Unit directory.

$ php artisan make:test AccessorTest --unit

And that should create the following class at tests/Unit/AccessorTest.php.

<?php
// tests/Unit/AccessorTest.php

namespace Tests\Unit;

use Tests\TestCase;
use Illuminate\Foundation\Testing\DatabaseMigrations;
use Illuminate\Foundation\Testing\DatabaseTransactions;

class AccessorTest extends TestCase
{
    /**
     * A basic test example.
     *
     * @return void
     */
    public function testExample()
    {
        $this->assertTrue(true);
    }
}

Let's replace it with some meaningful code.

<?php
// tests/Unit/AccessorTest.php

namespace Tests\Unit;

use Tests\TestCase;
use Illuminate\Foundation\Testing\DatabaseMigrations;
use Illuminate\Foundation\Testing\DatabaseTransactions;
use Illuminate\Support\Facades\DB;
use App\Post;

class AccessorTest extends TestCase
{
    /**
     * Test accessor method
     *
     * @return void
     */
    public function testAccessorTest()
    {
        // load post manually first
        $db_post = DB::select('select * from posts where id = 1');
        $db_post_title = ucfirst($db_post[0]->name);

        // load post using Eloquent
        $model_post = Post::find(1);
        $model_post_title = $model_post->name;

        $this->assertEquals($db_post_title, $model_post_title);
    }
}

As you can see, the code is exactly the same as it would have been in core PHP. We've just imported Laravel-specific dependencies that allow us to use the required APIs. In the testAccessorTest method, we're supposed to test the correctness of the getNameAttribute method of the Post model.

To do that, we've fetched an example post from the database and prepared the expected output in the $db_post_title variable. Next, we load the same post using the Eloquent model that executes the getNameAttribute method as well to prepare the post title. Finally, we use the assertEquals method to compare both variables as usual.

So that's how to prepare unit test cases in Laravel.

Functional Testing

In this section, we'll create the functional test case that tests the functionality of the controller that we created earlier.

Run the following command to create the AccessorTest functional test case class. As we're not using the --unit keyword, it'll be treated as a functional test case and placed under the tests/Feature directory.

$ php artisan make:test AccessorTest

It'll create the following class at tests/Feature/AccessorTest.php.

<?php
// tests/Feature/AccessorTest.php

namespace Tests\Feature;

use Tests\TestCase;
use Illuminate\Foundation\Testing\WithoutMiddleware;
use Illuminate\Foundation\Testing\DatabaseMigrations;
use Illuminate\Foundation\Testing\DatabaseTransactions;

class AccessorTest extends TestCase
{
    /**
     * A basic test example.
     *
     * @return void
     */
    public function testExample()
    {
        $this->assertTrue(true);
    }
}

Let's replace it with the following code.

<?php
// tests/Feature/AccessorTest.php

namespace Tests\Feature;

use Tests\TestCase;
use Illuminate\Foundation\Testing\WithoutMiddleware;
use Illuminate\Foundation\Testing\DatabaseMigrations;
use Illuminate\Foundation\Testing\DatabaseTransactions;
use Illuminate\Support\Facades\DB;

class AccessorTest extends TestCase
{
    /**
     * A basic test example.
     *
     * @return void
     */
    public function testBasicTest()
    {
        // load post manually first
        $db_post = DB::select('select * from lvl_posts where id = 1');
        $db_post_title = ucfirst($db_post[0]->name);

        $response = $this->get('/accessor/index?id=1');
        $response->assertStatus(200);
        $response->assertSeeText($db_post_title);
    }
}

Again, the code should look familiar to those who have prior experience in functional testing.

Firstly, we're fetching an example post from the database and preparing the expected output in the $db_post_title variable. Following that, we try to simulate the /accessor/index?id=1 GET request and grab the response of that request in the $response variable.

Next, we've tried to match the response code in the $response variable with the expected response code. In our case, it should be 200 as we should get a valid response for our GET request. Further, the response should contain a title that starts with uppercase, and that's exactly what we're trying to match using the assertSeeText method.

And that's an example of a functional test case. Now we have everything we need to run our tests. Let's go ahead and run the following command in the root of your application to run all tests.

$ phpunit

That should run all tests in your application. You should see a standard PHPUnit output that displays the status of tests and assertions in your application.

And with that, we're at the end of this article.

Conclusion

Today, we explored the details of testing in Laravel, which already supports PHPUnit in its core. The article started with a basic introduction to unit and functional testing, and as we moved on we explored the specifics of testing in the context of Laravel.

In the process, we created a handful of examples that demonstrated how you could create unit and functional test cases using the artisan command.

If you're just getting started with Laravel or looking to expand your knowledge, site, or application with extensions, we have a variety of things you can study in Envato Market.

Don't hesitate to express your thoughts using the feed below!

Categories: Web Design

Which Podcasts Should Web Designers And Developers Be Listening To?

Smashing Magazine - Wed, 04/18/2018 - 04:45
Ricky Onsman 2018-04-18T13:45:00+02:00

We asked the Smashing community what podcasts they listened to, aiming to compile a shortlist of current podcasts for web designers and developers. We had what can only be called a very strong response — both in number and in passion.

First, we winnowed out the podcasts that were on a broader theme (e.g. creativity, mentoring, leadership), on a narrower theme (e.g. on one specific WordPress theme) or on a completely different theme (e.g. car maintenance — I’m sure it was well-intentioned).

When we filtered out those that had produced no new content in the last three months or more (although then we did have to make some exceptions, as you’ll see), and ordered the rest according to how many times they were nominated, we had a graded shortlist of 55.

Agreed, that’s not a very short shortlist.

So, we broke it down into five more reasonably sized shortlists:

Obviously, it’s highly unlikely anyone could — or would want to — listen to every episode of every one of these podcasts. Still, we’re pretty sure that any web designer or developer will find a few podcasts in this lot that will suit their particular listening tastes.

A couple of caveats before we begin:

  • We don’t claim to be comprehensive. These lists are drawn from suggestions from readers (not all of which were included) plus our own recommendations.
  • The descriptions are drawn from reader comments, summaries provided by the podcast provider and our own comments. Podcast running times and frequency are, by and large, approximate. The reality is podcasts tend to vary in length, and rarely stick to their stated schedule.
  • We’ve listed each podcast once only, even though several could qualify for more than one list.
  • We’ve excluded most videocasts. This is just for listening (videos probably deserve their own article).
Podcasts For Web Developers

Syntax

Wes Bos and Scott Tolinski dive deep into web development topics, explaining how they work and talking about their own experiences. They cover everything from JavaScript frameworks like React, to the latest advancements in CSS, to simplifying web tooling. 30-70 minutes. Weekly.

Developer Tea

A podcast for developers designed to fit inside your tea break, a highly-concentrated, short, frequent podcast specifically for developers who like to learn on their tea (and coffee) break. The Spec Network also produces Design Details. 10-30 minutes. Every two days.

Web Platform Podcast

Covers the latest in browser features, standards, and the tools developers use to build for the web of today and beyond. Founded in 2014 by Erik Isaksen. Hosts Danny, Amal, Leon, and Justin are joined by a special guest to discuss the latest developments. 60 minutes. Weekly.

Devchat Podcasts

Fourteen podcasts with a range of hosts that each explore developments in a specific aspect of development or programming including Ruby, iOS, Angular, JavaScript, React, Rails, security, conference talks, and freelancing. 30-60 minutes. Weekly.

The Bike Shed

Hosts Derek Prior, Sean Griffin, Amanda Hill and guests discuss their development experience and challenges with Ruby, Rails, JavaScript, and whatever else is drawing their attention, admiration, or ire at any particular moment. 30-45 minutes. Weekly.

NodeUp

Hosted by Rod Vagg and a series of occasional co-hosts, this podcast features lengthy discussions with guests and panels about Node.js and Node-related topics. 30-90 minutes. Weekly / Monthly.

.NET Rocks

Carl Franklin and Richard Campbell host an internet audio talk show for anyone interested in programming on the Microsoft .NET platform, including basic information, tutorials, product developments, guests, tips and tricks. 60 minutes. Twice a week.

Three Devs and a Maybe

Join Michael Budd, Fraser Hart, Lewis Cains, and Edd Mann as they discuss software development, frequently joined by a guest on the show’s topic, ranging from daily developer life, PHP, frameworks, testing, good software design and programming languages. 45-60 minutes. Weekly.

Weekly Dev Tips

Hosted by experienced software architect, trainer, and entrepreneur Steve Smith, Weekly Dev Tips offers a variety of technical and career tips for software developers. Each tip is quick and to the point, describing a problem and one or more ways to solve that problem. 5-10 minutes. Weekly.

devMode.fm

Dedicated to the tools, techniques, and technologies used in modern web development. Each episode, Andrew Welch and Patrick Harrington lead a cadre of hosts discussing the latest hotness, pet peeves, and the frontend development technologies we use. 60-90 minutes. Twice a week.

CodeNewbie

Stories from people on their coding journey. New episodes published every Monday. The most supportive community of programmers and people learning to code. Founded by Saron Yitbarek. 30-60 minutes. Weekly.

Front End Happy Hour

A podcast featuring panels of engineers from @Netflix, @Evernote, @Atlassian and @LinkedIn talking over drinks about all things Front End development. 45-60 minutes. Every two weeks.

Under the Radar

From development and design to marketing and support, Under the Radar is all about independent app development. Hosted by David Smith and Marco Arment. 30 minutes. Weekly.

Hanselminutes

Scott Hanselman interviews movers and shakers in technology in this commute-time show. From Michio Kaku to Paul Lutus, Ward Cunningham to Kimberly Bryant, Hanselminutes is talk radio guaranteed not to waste your time. 30 minutes. Weekly.

Fixate on Code

Since October 2017, Larry Botha from South African design agency Fixate has been interviewing well known achievers in web design and development on how to help front end developers write better code. 30 minutes. Weekly.

Podcasts For Web Designers

99% Invisible

Design is everywhere in our lives, perhaps most importantly in the places where we’ve just stopped noticing. 99% Invisible is a weekly exploration of the process and power of design and architecture, from award winning producer Roman Mars. 20-45 minutes. Weekly.

Design Details

A show about the people who design our favorite products, hosted by Bryn Jackson and Brian Lovin. The Spec Network also produces Developer Tea. 60-90 minutes. Weekly.

Presentable

Host Jeffrey Veen brings over two decades of experience as a designer, developer, entrepreneur, and investor as he chats with guests about how we design and build the products that are shaping our digital future and how design is changing the world. 45-60 minutes. Weekly.

Responsive Web Design

In each episode, Karen McGrane and Ethan Marcotte (who coined the term “responsive web design”) interview the people who make responsive redesigns happen. 15-30 minutes. Weekly. (STOP PRESS: Karen and Ethan issued their final episode of this podcast on 26 March 2018.)

RWD Podcast

Host Justin Avery explores new and emerging web technologies, chats with web industry leaders and digs into all aspects of responsive web design. 10-60 minutes. Weekly / Monthly.

UXPodcast

Business, technology and people in digital media. Moving the conversation beyond the traditional realm of User Experience. Hosted by Per Axbom and James Royal-Lawson from Sweden. 30-45 minutes. Every two weeks.

UXpod

A free-ranging set of discussions on matters of interest to people involved in user experience design, website design, and usability in general. Gerry Gaffney set this up to provide a platform for discussing topics of interest to UX practitioners. 30-45 minutes. Weekly / Monthly.

UX-radio

A podcast about IA, UX and Design that features collaborative discussions with industry experts to inspire, educate and share resources with the community. Created by Lara Fedoroff and co-hosted with Chris Chandler. 30-45 minutes. Weekly / Monthly.

User Defenders

Host Jason Ogle aims to highlight inspirational UX Designers leading the way in their craft, by diving deeper into who they are, and what makes them tick/successful, in order to inspire and equip those aspiring to do the same. 30-90 minutes. Weekly.

The Drunken UX Podcast

Our hosts Michael Fienen and Aaron Hill look at issues facing websites and developers that impact the way we all use the web. “In the process, we’ll drink drinks, share thoughts, and hopefully make you laugh a little.” 60 minutes. Twice a week.

UI Breakfast Podcast

Join Jane Portman for conversations about UI/UX design, products, marketing, and so much more, with awesome guests who are industry experts ready to share actionable knowledge. 30-60 minutes. Weekly.

Efficiently Effective

Saskia Videler keeps us up to date with what’s happening in the field of UX and content strategy, aiming to help content experts, UX professionals and others create better digital experiences. 25-40 minutes. Monthly.

The Honest Designers Show

Hosts Tom Ross, Ian Barnard, Dustin Lee and Lisa Glanz have each found success in their creative fields and are here to give struggling designers a completely honest, under the hood look at what it takes to flourish in the modern world. 30-60 minutes. Weekly.

Design Life

A podcast about design and side projects for motivated creators. Femke van Schoonhoven and Charli Prangley (serial side project addicts) saw a gap in the market for a conversational show hosted by two females about design and issues young creatives face. 30-45 minutes. Weekly.

Layout FM

A weekly podcast about design, technology, programming and everything else hosted by Kevin Clark and Rafael Conde. 60-90 minutes. Weekly.

Bread Time

Gabriel Valdivia and Charlie Deets host this micro-podcast about design and technology, the impact of each on the other, and the impact of them both on all of us. 10-30 minutes. Weekly.

The Deeply Graphic DesignCast

Every episode covers a new graphic design-related topic, and a few relevant tangents along the way. Wes McDowell and his co-hosts also answer listener-submitted questions in every episode. 60 minutes. Every two weeks.

Podcasts On The Web, The Internet, And Technology

The Big Web Show

Veteran web designer and industry standards champion Jeffrey Zeldman is joined by special guests to address topics like web publishing, art direction, content strategy, typography, web technology, and more. 60 minutes. Weekly.

ShopTalk

A podcast about front end web design, development and UX. Each week Chris Coyier and Dave Rupert are joined by a special guest to talk shop and answer listener submitted questions. 60 minutes. Weekly.

Boagworld

Paul Boag and Marcus Lillington are joined by a variety of guests to discuss a range of web design related topics. Fun, informative and quintessentially British, with content for designers, developers and website owners, something for everybody. 60 minutes. Weekly.

The Changelog

Conversations with the hackers, leaders, and innovators of open source. Hosts Adam Stacoviak and Jerod Santo do in-depth interviews with the best and brightest software engineers, hackers, leaders, and innovators. 60-90 minutes. Weekly.

Back to Front Show

Topics under discussion hosted by Keir Whitaker and Kieran Masterton include remote working, working in the web industry, productivity, hipster beards and much more. Released irregularly but always produced with passion. 30-60 minutes. Weekly / Monthly.

The Next Billion Seconds

The coming “next billion seconds” are the most important in human history, as technology transforms the way we live and work. Mark Pesce talks to some of the brightest minds shaping our world. 30-60 minutes. Every two weeks.

Toolsday

Hosted by Una Kravets and Chris Dhanaraj, Toolsday is about the latest in tech tools, tips, and tricks. 30 minutes. Weekly.

Reply All

A podcast about the internet, often delving deeper into modern life. Hosted by PJ Vogt and Alex Goldman from US narrative podcasting company Gimlet Media. 30-60 minutes. Weekly.

CTRL+CLICK CAST

Diverse voices from industry leaders and innovators, who tackle everything from design, code and CMS, to culture and business challenges. Focused, topical discussions hosted by Lea Alcantara and Emily Lewis. 60 minutes. Every two weeks.

Modern Web

Explores next generation frameworks, standards, and techniques. Hosted by Tracy Lee. Topics include EmberJS, ReactJS, AngularJS, ES2015, RxJS, functional reactive programming. 60 minutes. Weekly.

Relative Paths

A UK based podcast on “web development and stuff like that” for web industry types. Hosted by Mark Phoenix and Ben Hutchings. 60 minutes. Every two weeks.

Business Podcasts For Web Professionals

The Businessology Show

The Businessology Show is a podcast about the business of design and the design of business, hosted by CPA/coach Jason Blumer. 30 minutes. Monthly.

CodePen Radio

Chris Coyier, Alex Vazquez, and Tim Sabat, the co-founders of CodePen, talk about the ins and outs of running a small web software business. The good, the bad, and the ugly. 30 minutes. Weekly.

BizCraft

Podcast about the business side of web design, recorded live almost every two weeks. Your hosts are Carl Smith of nGen Works and Gene Crawford of UnmatchedStyle. 45-60 minutes. Every two weeks.

Podcasts That Don’t Have Recent Episodes (But Do Have Great Archives)

Design Review Podcast

No chit-chat, just focused in-depth discussions about design topics that matter. Jonathan Shariat and Chris Liu are your hosts and bring to the table passion and years of experience. 30-60 minutes. Every two weeks. Last episode 26 November 2017.

Style Guide Podcast

A small batch series of interviews (20 in total) on Style Guides, hosted by Anna Debenham and Brad Frost, with high profile designer guests. 45 minutes. Weekly. Last episode 19 November 2017.

True North

Looks to uncover the stories of everyday people creating and designing, and highlight the research and testing that drives innovation. Produced by Loop11. 15-60 minutes. Every two weeks. Last episode 18 October 2017.

UIE.fm Master Feed

Get all episodes from every show on the UIE network in this master feed: UIE Book Corner (with Adam Churchill) and The UIE Podcast (with Jared Spool) plus some archived older shows. 15-60 minutes. Weekly. Last episode 4 October 2017.

Let’s Make Mistakes

A podcast about design with your hosts, Mike Monteiro, Liam Campbell, Steph Monette, and Seven Morris, plus a range of guests who discuss good design, business and ethics. 45-60 minutes. Weekly / Monthly. Last episode 3 August 2017.

Motion and Meaning

A podcast about motion for digital designers brought to you by Val Head and Cennydd Bowles, covering everything from the basic principles of animation through to advanced tools and techniques. 30 minutes. Monthly. Last episode 13 December 2016.

The Web Ahead

Conversations with world experts on changing technologies and future of the web. The Web Ahead is your shortcut to keeping up. Hosted by Jen Simmons. 60-100 minutes. Monthly. Last episode 30 June 2016.

Unfinished Business

UK designer Andy Clarke and guests have plenty to talk about, mostly on and around web design, creative work and modern life. 60-90 minutes. Monthly. Last episode 28 June 2016. (STOP PRESS: A new episode was issued on 20 March 2018. Looks like it’s back in action.)

Dollars to Donuts

A podcast where Steve Portigal talks with the people who lead user research in their organizations. 50-75 minutes. Irregular. Last episode 10 May 2016.

Any Other Good Ones Missing?

As we noted, there are probably many other good podcasts out there for web designers and developers. If we’ve missed your favorite, let us know about it in the comments, or in the original threads on Twitter or Facebook.

(vf, ra, il)
Categories: Web Design

How To Improve Your Design Process With Data-Based Personas

Smashing Magazine - Tue, 04/17/2018 - 04:40
How To Improve Your Design Process With Data-Based Personas, by Tim Noetzel (2018-04-17)

Most design and product teams have some type of persona document. Theoretically, personas help us better understand our users and meet their needs. The idea is that codifying what we’ve learned about distinct groups of users helps us make better design decisions. Referring to these documents ourselves and sharing them with non-design team members and external stakeholders should ultimately lead to a user experience more closely aligned with what real users actually need.

In reality, personas rarely prove equal to these expectations. On many teams, persona documents sit abandoned on hard drives, collecting digital dust while designers continue to create products based primarily on whim and intuition.

In contrast, well-researched personas serve as a proxy for the user. They help us check our work and ensure that we’re building things users really need.

In fact, the best personas don’t just describe users; they actually help designers predict their behavior. In her article on persona creation, Laura Klein describes it perfectly:

“If you can create a predictive persona, it means you truly know not just what your users are like, but the exact factors that make it likely that a person will become and remain a happy customer.”

In other words, useful personas actually help design teams make better decisions because they can predict with some accuracy how users will respond to potential product changes.

Obviously, for personas to facilitate these types of predictions, they need to be based on more than intuition and anecdotes. They need to be data-driven.

So, what do data-driven personas look like, and how do you make one?

Start With What You Think You Know

The first step in creating data-driven personas is similar to the typical persona creation process. Write down your team’s hypotheses about what the key user groups are and what’s important to each group.

If your team is like most, some members will disagree with others about which groups are important, the particular makeup and qualities of each persona, and so on. This type of disagreement is healthy, but unlike the usual persona creation process you may be used to, you’re not going to get bogged down here.

Instead of debating the merits of each persona (and the various facets and permutations thereof), the important thing is to be specific about the different hypotheses you and your team have and write them down. You’re going to validate these hypotheses later, so it’s okay if your team disagrees at this stage. You may choose to focus on a few particular personas, but make sure to keep a backlog of other ideas as well.

First, start by recording all the hypotheses you have about key personas. You’ll refine these through user research in the next step.

I recommend aiming for a short, 1–2 sentence description of each hypothetical persona that details who they are, what root problem they hope to solve by using your product, and any other pertinent details.

You can use the traditional user stories framework for this. If you were creating hypothetical personas for Craigslist, one of these statements might read:

“As a recent college grad, I want to find cheap furniture so I can furnish my new apartment.”

Another might say:

“As a homeowner with an extra bedroom, I want to find a responsible tenant to rent this space to so I can earn some extra income.”

If you have existing data — things like user feedback emails, NPS scores, user interview notes, or analytics data — be sure to go over them and include relevant data points in your notes along with your user stories.

Validate And Refine

The next step is to validate and refine these hypotheses with user interviews. For each of your hypothetical personas, you’ll want to start by interviewing 5 to 10 people who fit that group.

You have three key goals for these interviews. For each group, you need to:

  1. Understand the context in which they need to solve the problem.
  2. Confirm that members of the persona group agree that the problem you recorded is an urgent and painful one that they struggle to solve now.
  3. Identify key predictors of whether a member of this persona is likely to become and remain an active user.

The approach you take to these interviews may vary, but I recommend a hybrid approach between a traditional user interview, which is very non-leading, and a Lean Problem interview, which is deliberately leading.

Start with the traditional user interview approach and ask behavior-based, non-leading questions. In the Craigslist example, we might ask the recent college grad something like:

“Tell me about the last time you purchased furniture. What did you buy? Where did you buy it?”

These types of questions are great for establishing whether the interviewee recently experienced the problem in question, how they solved it, and whether they’re dissatisfied with their current solution.

Once you’ve finished asking these types of questions, move on to the Lean Problem portion of the interview. In this section, you want to tell a story about a time when you experienced the problem — establishing the various issues you struggled with and why it was frustrating — and see how they respond.

You might say something like this:

“When I graduated college, I had to get new furniture because I wasn’t living in the dorm anymore. I spent forever looking at furniture stores, but they were all either ridiculously expensive or big-box stores with super-cheap furniture I knew would break in a few weeks. I really wanted to find good furniture at a reasonable price, but I couldn’t find anything and I eventually just bought the cheap stuff. It inevitably broke, and I had to spend even more money, which I couldn’t really afford. Does any of that resonate with you?”

What you’re looking for here is emphatic agreement. If your interviewee says "yes, that resonates" but doesn’t get much more excited than they were during the rest of the interview, the problem probably wasn’t that painful for them.

You can validate or invalidate your persona hypotheses with a series of quick, 30-minute interviews.

On the other hand, if they get excited, empathize with your story, or give their own anecdote about the problem, you know you’ve found a problem they really care about and need to be solved.

Finally, make sure to ask any demographic questions you didn’t cover earlier, especially those around key attributes you think might be significant predictors of whether somebody will become and remain a user. For example, you might think that recent college grads who get high-paying jobs aren’t likely to become users because they can afford to buy furniture at retail; if so, be sure to ask about income.

You’re looking for predictable patterns. If you bring in 5 members of your persona and 4 of them have the problem you’re trying to solve and desperately want a solution, you’ve probably identified a key persona.

On the other hand, if you’re getting inconsistent results, you likely need to refine your hypothetical persona and repeat this process, using what you learn in your interviews to form new hypotheses to test. If you can’t consistently find users who have the problem you want to solve, it’s going to be nearly impossible to get millions of them to use your product. So don’t skimp on this step.

Create Your Personas

The penultimate step in this process is creating the actual personas themselves. This is where things get interesting. Unlike traditional personas, which are typically static, your data-driven personas will be living, breathing documents.

The goal here is to combine the lessons you learned in the previous step — about who the user is and what they need — with data that shows how well the latest iteration of your product is serving their needs.

At my company Swish, each one of our personas includes two sections with the following data:

Predictive User Data:

  • Description of the user including predictive demographics.
  • Quotes from at least 3 actual users that describe the jobs-to-be-done.
  • The percentage of the potential user base the persona represents.

Product Performance Data:

  • The percentage of our current user base the persona represents.
  • Latest activation, retention, and referral rates for the persona.
  • Current NPS Score for the persona.

If you’re looking for more ideas for data to include, check out Coryndon Luxmoore’s presentation on how his team created data-driven personas at Buildium.

It may take some time for your team to produce all this information, but it’s okay to start with what you have and improve the personas over time. Your personas won’t be sitting on a shelf. Every time you release a new feature or change an existing one, you should measure the results and update your personas accordingly.

Integrate Your Personas Into Your Workflow

Now that you’ve created your personas, it’s time to actually use them in your day-to-day design process. Here are 4 opportunities to use your new data-driven personas:

  1. At Standups
    At Swish, our standups are a bit different. We start these meetings by reviewing the activation, retention, and referral metrics for each persona. This ensures that — as we discuss yesterday’s progress and today’s obstacles — we remain focused on what really matters: how well we’re serving our users.
  2. During Prioritization
    Your data-driven personas are a great way to keep team members honest as you discuss new features and changes. When you know how much of your user base the persona represents and how well you’re serving them, it quickly becomes obvious whether a potential feature could actually make a difference. Suddenly deciding what to work on won’t require hours of debate or horse-trading.
  3. At Design Reviews
    Your data-driven personas are a great way to keep team members honest as you discuss new designs. When team members can creditably represent users with actual quotes from user interviews, their feedback quickly becomes less subjective and more useful.
  4. When Onboarding New Team Members
    New hires inevitably bring a host of implicit biases and assumptions about the user with them when they start. By including your data-driven personas in their onboarding documents, you can get new team members up to speed much more quickly and ensure they understand the hard-earned lessons your team learned along the way.
Keeping Your Personas Up To Date

It’s vitally important to keep your personas up-to-date so your team members can continue to rely on them to guide their design thinking.

As your product improves, it’s simple to update NPS scores and performance data. I recommend doing this monthly at a minimum; if you’re working on an early-stage, rapidly-changing product, you’ll get better mileage by updating these stats weekly instead.

It’s also important to check in with members of your personas periodically to make sure your predictive data stays relevant. As your product evolves and the competitive landscape changes, your users’ views about their problems will change as well. If your growth starts to plateau, another round of interviews may help to unlock insights you didn’t find the first time. Even if everything is going well, try to check in with members of your personas — both current users of your product and some non-users — every 6 to 12 months.

Wrapping Up

Building data-driven personas is a challenging project that takes time and dedication. You won’t uncover the insights you need or build the conviction necessary to unify your team with a week-long throwaway project.

But if you put in the time and effort necessary, the results will speak for themselves. Having the type of clarity that data-driven personas provide makes it far easier to iterate quickly, improve your user experience, and build a product your users love.

Further Reading

If you’re interested in learning more, I highly recommend checking out the linked articles above, as well as the following resources:

(rb, ra, yk, il)
Categories: Web Design

12 Best Contact Form PHP Scripts

Tuts+ Code - Web Development - Mon, 04/16/2018 - 05:53

Contact forms are a must-have for every website. They encourage your site visitors to engage with you while potentially lowering the amount of spam you get.

For businesses, this engagement with visitors increases the chances of turning them into clients or customers and thus increasing revenue. 

Whether your need is for a simple three-line contact form or a more complex one that offers loads of options and functions, you’re sure to find the right PHP contact form here in our 12 Best Contact Form PHP Scripts on CodeCanyon.

1. Quform- Responsive AJAX Contact Form

There's a reason Quform- Responsive AJAX Contact Form is one of the best-selling PHP contact forms at CodeCanyon. This versatile AJAX contact form can be adapted to be a register form, quote form, or any other form needed. With tons of other customisations available, Quform- Responsive AJAX Contact Form is bound to keep the most discerning user happy.

Best features:

  • three ready-to-use themes with six variations
  • ability to integrate into your own theme design
  • ability to create complex form layouts
  • file uploads supported
  • and more

User DigitalOxide says:

"This script is incredible! It is very detailed instructions, examples and is very fully featured. I can't think of anything I could ever need (as far as forms go) that this script is not able to accomplish!"2. KONTAKTO

KONTAKTO only entered the market in March of 2017 but has already developed a name for itself as one of the top-rated scripts in this category. The standout feature of this beautifully designed contact form is the stylish map with a location pin that comes integrated in the form.

Best features:

  • required field validation
  • anti-spam with simple Captcha math
  • defaults to PHP mail but SMTP option available
  • repeat submission prevention
  • and more

User vholecek says:

"The design is outstanding and the author is very responsive to questions. I got in a little over my head on the deployment of the template and the author had it sorted out in less than 24 hours."3. ContactMe

ContactMe is an incredibly versatile and easily customisable contact form. With 28 ready-to-use styles and 4 different themes, the sky's the limit when it comes to creating the ideal form to fit your needs. 

Best features:

  • easy to install and customise
  • supports both versions of Google reCAPTCHA
  • multiple forms per page allowed
  • four sizes of the form available 
  • and more

User ddglobal says:

"Great plugin for Contact Form. Excellent code, variety & flexibility, incredible fast and outstanding support!"4. PHP Form Builder

Another CodeCanyon top seller, PHP Form Builder includes the jQuery live validation plugin which enables you to build any type of form, connect your database, insert, update or delete records, and send your emails using customisable HTML/CSS templates.

Best features:

  • over 50 prebuilt templates included
  • accepts any HTML5 form elements
  • default options ready for Bootstrap
  • email sending 
  • and more

User sflahaut says:

"Excellent product containing ready to use examples of all types of forms. Documentation is excellent and customer support is exceptional as many others have commented previously. I highly recommend this as it can save a lot of time, especially for developers with not a lot of web experience like myself."5. Contact Framework

Contact Framework has been around for a while, and it’s just gotten better and better with each new update. Its simple yet modern design comes in three themes and five colours, giving you a lot of options for customisation and integration into your site design. 

Best features:

  • ability to attach files
  • supports both versions of Google reCAPTCHA and math CAPTCHA
  • customisable redirect messages
  • customisable notification messages
  • and more 

User fatheaddrummer says:

“Awesome flexibility. The forms look awesome. Outstanding customer support!”

6. SLEEK Contact Form

Having made its debut in 2017, SLEEK Contact Form is one of the newest PHP contact form scripts on CodeCanyon. With its simple and stylish design and functionality, it is ideal for creatives or those looking to bring a bit of cool style to their website's contact form.

Best features:

  • invisible Google reCaptcha anti-spam system
  • ability to add attachments of any type
  • automatically get the IP and location of the sender by email
  • easy to modify and implement new fields
  • and more

User thernztrom says:

"Just what I wanted. Good support from author!"7. Ultimate PHP, HTML5 & AJAX Contact Form

The Ultimate PHP, HTML5 and AJAX Contact Form replaces the hugely successful AJAX Contact Form and allows you to easily place and manage a self-contained contact form on any page of your existing PHP website.

Best features:

  • supports file uploads to attach to email
  • field type validation
  • multiple forms per page allowed
  • Google reCAPTCHA capable
  • and more

User geudde says:

"Awesome coding and impeccable documentation. I had the form embedded in my website in less than 10 minutes, and most of that time was spent signing up with Google for reCAPTCHA. I purchased the same author's AJAX form years ago. The new version is so much more elegant and blends seamlessly into my site."8. Perfect Contact Us Form

Perfect Contact Us Form is a Bootstrap-based form which is fully customisable and easy to use. The easy-to-use form integrates well with HTML and PHP pages and will appeal to both beginners and more experienced developers alike.

Best features:

  • AJAX based
  • both SMTP and PHP email script
  • jQuery-based Captcha is included for anti-spam
  • and more

User andreahale says:

"Excellent support and super fast response time. Quickly helped me with the modifications I wanted to make to the form."
9. Contact Form Generator

Contact Form Generator is another of CodeCanyon’s best-selling PHP Contact Form Scripts. It features a user-friendly drag-and-drop interface that helps you build contact forms, feedback forms, online surveys, event registrations, etc., and get responses via email in a matter of minutes. It is a highly effective form builder that enables you to create well-designed contact forms and extend their range to include other functions.

Best features:

  • instant email or SMS notifications
  • custom email auto-responder
  • integrated with MailChimp, AWeber, and five other email subscription services
  • anti-spam protection
  • and more

User Enrico333 says:

"Forms have been a pain for as long as I can recall - this has truly made my life easier."

10. Feedback Form

Really, a feedback form is more limited in its function than a general contact form, but as contact forms can also be used to leave feedback, I thought why not include a bona fide feedback form in this list. 

Feedback Form allows your users to rate your product or service and get the kind of in-depth feedback necessary to improve your business. Feedback Form is super easy to use and can be added to any website in the shortest amount of time. 

Best features: 

  • multi-purpose feedback form 
  • fully customisable 
  • pop-up form (no page reload) 
  • form validation 
  • and more

User diwep06 says:

"Thanks for the Script ... Ultra nice, simple to set up and no skill needed."11. ContactPLUS+ PHP Contact Form

ContactPlus+ is a clean and simple contact form which comes in three styles: an unstyled version that you can build to suit your taste, a normal form with just the essential information needed on a contact form, and a longer form to accommodate an address.

Best features:

  • Captcha verification
  • successful submission message
  • two styled versions and one unstyled version
  • and more

User itscody says:

"He went above and beyond to make sure this worked as I wanted with the overly complicated design of my website."12. Easy Contact Form With Attachments

Though not the prettiest contact form in this list, Easy Contact Form With Attachments is certainly one of the easiest to add to your site. Furthermore, configuration requires just your email address and company info. The form offers five different themes to choose from and, as the name suggests, allows you to send file attachments.

Best features:

  • attachment file size limit can be adjusted up from default of 5MB
  • user friendly with one-click human verification against spam bots
  • optional phone number field and company information
  • error messages can be easily modified
  • and more

User powerj says:

"Excellent support! Great code quality and best customer service!"Conclusion

These 12 Best Contact Form PHP Scripts just scratch the surface of products available at Envato Market, so if none of them fit your needs, there are plenty of other great options you may prefer.

And if you want to improve your PHP skills, check out the ever so useful free PHP tutorials we have on offer.

Categories: Web Design

Getting Started With the Mojs Animation Library: The ShapeSwirl and Stagger Modules

Tuts+ Code - Web Development - Mon, 04/16/2018 - 05:00

The first and second tutorials of this series covered how to animate different HTML elements and SVG shapes using mojs. In this tutorial, we will learn about two more modules which can make our animations more interesting. The ShapeSwirl module allows you to add a swirling motion to any shape that you create. The stagger module, on the other hand, allows you to create and animate multiple shapes at once.

Using the ShapeSwirl Module

The ShapeSwirl module in mojs has a constructor which accepts all the properties of the Shape module. It also accepts some additional properties which allow it to create a swirling motion.

You can specify the amplitude or size of the swirl using the swirlSize property. The oscillation frequency during the swirling motion is determined by the value of the swirlFrequency property. You can also scale down the total path length of the swirling shape using the pathScale property. Valid values for this property range between 0 and 1. The direction of the motion can be specified using the direction property. Keep in mind that direction only has two valid values: -1 and 1. The shapes in a swirling motion will follow a sinusoidal path by default. However, you can animate them along straight lines by setting the value of the isSwirl property to false.

Besides these additional properties, the ShapeSwirl module also changes the default value of some properties from the Shape module. The radius of any swirling shape is set to 5 by default. Similarly, the scale property is set to be animated from 1 to 0 in the ShapeSwirl module.

In the following code snippet, I have used all these properties to animate two circles in a swirling motion.

var circleSwirlA = new mojs.ShapeSwirl({
  parent: ".container",
  shape: "circle",
  fill: "red",
  stroke: "black",
  radius: 20,
  y: { 0: 200 },
  angle: { 0: 720 },
  duration: 2000,
  repeat: 10
});

var circleSwirlB = new mojs.ShapeSwirl({
  parent: ".container",
  shape: "circle",
  fill: "green",
  stroke: "black",
  radius: 20,
  y: { 0: 200 },
  angle: { 0: 720 },
  duration: 2000,
  swirlSize: 20,
  swirlFrequency: 10,
  isSwirl: true,
  pathScale: 0.5,
  repeat: 10
});

In the following demo, you can click on the Play button to animate two circles, a triangle and a cross in a swirling motion. 

Using the Stagger Module

Unlike all other modules that we have discussed so far, stagger is not a constructor. This module is actually a function which can be wrapped around any other module to animate multiple shapes or elements at once. This can be very helpful when you want to apply the same animation sequence on different shapes but still change their magnitude independently.

Once you have wrapped the Shape module inside the stagger() function, you will be able to specify the number of elements to animate using the quantifier property. After that, you can specify the value of all other Shape related properties. It will now become possible for each property to accept an array of values to be applied on individual shapes sequentially. If you want all shapes to have the same value for a particular property, you can just set the property to be equal to that particular value instead of an array of values.

The following example should clarify how the values are assigned to different shapes:

var staggerShapes = mojs.stagger(mojs.Shape);

var triangles = new staggerShapes({
  quantifier: 5,
  shape: 'polygon',
  fill: 'yellow',
  stroke: 'black',
  strokeWidth: 5,
  radius: [20, 30, 40, 50, 60],
  left: 100,
  top: 200,
  x: [{0: 100}, {0: 150}, {0: 200}, {0: 250}, {0: 300}],
  duration: 2000,
  repeat: 10,
  easing: 'quad.in',
  isYoyo: true,
  isShowStart: true
});

We begin by wrapping the Shape module inside the stagger() function. This allows us to create multiple shapes at once. We have set the value of the quantifier property to 5. This creates five different shapes, which in our case are polygons. Each polygon is a triangle because the default value of the points property is 3. We have already covered all these properties in the second tutorial of the series.

There is only one value of fill, stroke, and strokeWidth. This means that all the triangles will be filled with yellow and will have a black stroke. The width of stroke in each case would be 5px. The value of the radius property, on the other hand, is an array of five integers. Each integer determines the radius of one triangle in the group. The value 20 is assigned to the first triangle, and the value 60 is assigned to the last triangle.

All the properties have had the same initial and final values for the individual triangles so far. This means that none of the properties would be animated. However, the value of the x property is an array of objects containing the initial and final value of the horizontal position of each triangle. The first triangle will translate from x:0 to x:100, and the last triangle will translate from x:0 to x:300. The animation duration in each case would be 2000 milliseconds.

If there is a fixed step between different values of a property, you can also use stagger strings to specify the initial value and the increments. Stagger strings accept two parameters. The first is the start value, which is assigned to the first element in the group. The second value is step, which determines the increase or decrease in value for each successive shape. When only one value is passed to the stagger string, it is considered to be the step, and the start value in this case is assumed to be zero.

The triangle example above could be rewritten as:

var staggerShapes = mojs.stagger(mojs.Shape);

var triangles = new staggerShapes({
  quantifier: 5,
  shape: 'polygon',
  fill: 'yellow',
  stroke: 'black',
  strokeWidth: 5,
  radius: 'stagger(20, 10)',
  left: 100,
  top: 200,
  x: {0: 'stagger(100, 50)'},
  duration: 2000,
  repeat: 10,
  easing: 'quad.in',
  isYoyo: true,
  isShowStart: true
});

You can also assign random values to different shapes in a group using rand strings. You just have to supply a minimum and maximum value to a rand string, and mojs will automatically assign a value between these limits to individual shapes in the group.

In the following example, we are using the rand strings to randomly set the number of points for each polygon. You may have noticed that the total number of polygons we are rendering is 25, but the fill array only has four colors. When the array length is smaller than the value of the quantifier, the values for different shapes are determined by continuously cycling through the array until all the shapes have been assigned a value. For example, after assigning the color of the first four polygons, the color of the fifth polygon would be orange, the color of the sixth polygon would be yellow, and so on.

The stagger string sets the radius of the first polygon equal to 10 and then keeps increasing the radius of subsequent polygons by 1. The horizontal position of each polygon is similarly increased by 20, and the vertical position is determined randomly. The final angle value for each polygon is randomly set between -120 and 120. This way, some polygons rotate in a clockwise direction while others rotate in an anti-clockwise direction. The angle animation is also given its own easing function, which is different from the common animation of other properties.

var staggerShapes = mojs.stagger(mojs.Shape);

var polygons = new staggerShapes({
  quantifier: 25,
  shape: 'polygon',
  points: 'rand(3, 6)',
  fill: ['orange', 'yellow', 'cyan', 'lightgreen'],
  stroke: 'black',
  radius: 'stagger(10, 1)',
  left: 100,
  top: 100,
  x: 'stagger(0, 20)',
  y: 'rand(40, 400)',
  scale: {1: 'rand(0.1, 3)'},
  angle: {0: 'rand(-120, 120)', easing: 'elastic.in'},
  duration: 1000,
  repeat: 10,
  easing: 'cubic.in',
  isYoyo: true,
  isShowStart: true
});

Final Thoughts

We covered two more mojs modules in this tutorial. The ShapeSwirl module allows us to animate different shapes in a swirling motion. The stagger module allows us to animate multiple shape elements at once.

Each shape in a stagger group can be animated independently without any interference from other shapes. This makes the stagger module incredibly useful. We also learned how to use stagger strings to assign values with fixed steps to properties of different shapes.

If you have any questions related to this tutorial, please let me know in the comments. We will learn about the Burst module in the next tutorial of this series.

For additional resources to study or to use in your work, check out what we have available in the Envato Market.

Categories: Web Design

Best Practices With CSS Grid Layout

Smashing Magazine - Mon, 04/16/2018 - 04:35
Best Practices With CSS Grid Layout, by Rachel Andrew (2018-04-16)

An increasingly common question — now that people are using CSS Grid Layout in production — seems to be “What are the best practices?” The short answer to this question is to use the layout method as defined in the specification. The particular parts of the spec you choose to use, and indeed how you combine Grid with other layout methods such as Flexbox, is down to what works for the patterns you are trying to build and how you and your team want to work.

Looking deeper, I think this request for “best practices” perhaps indicates a lack of confidence in using a layout method that is very different from what came before. Perhaps it signals a concern that we are using Grid for things it wasn’t designed for, or not using Grid when we should be. Maybe it comes down to worries about supporting older browsers, or about how Grid fits into our development workflow.

In this article, I’m going to try and cover some of the things that could be described as best practices, as well as some things that you probably don’t need to worry about.

The Survey

To help inform this article, I wanted to find out how other people were using Grid Layout in production: what challenges they faced, what they really enjoyed about it, and whether there were common questions, problems, or methods being used. To find out, I put together a quick survey, asking questions about how people were using Grid Layout, and in particular, what they most liked and what they found challenging.

In the article that follows, I’ll be referencing and directly quoting some of those responses. I’ll also be linking to lots of other resources, where you can find out more about the techniques described. As it turned out, there was far more than one article worth of interesting things to unpack in the survey responses. I’ll address some of the other things that came up in a future post.

Accessibility

If there is any part of the Grid specification that you need to take care when using, it is when using anything that could cause content re-ordering:

“Authors must use order and the grid-placement properties only for visual, not logical, reordering of content. Style sheets that use these features to perform logical reordering are non-conforming.”

Grid Specification: Re-ordering and Accessibility

This is not unique to Grid, however, the ability to rearrange content so easily in two dimensions makes it a bigger problem for Grid. However, if using any method that allows content re-ordering — be that Grid, Flexbox or even absolute positioning — you need to take care not to disconnect the visual experience from how the content is structured in the document. Screen readers (and people navigating around the document using a keyboard only) are going to be following the order of items in the source.

The places where you need to be particularly careful are when using flex-direction to reverse the order in Flexbox; the order property in Flexbox or Grid; any placement of Grid items using any method, if it moves items out of the logical order in the document; and using the dense packing mode of grid-auto-flow.

For more information on this issue, see the following resources:

Which Grid Layout Methods Should I Use?

“With so much choice in Grid, it was a challenge to stick to a consistent way of writing it (e.g. naming grid lines or not, defining grid-template-areas, fallbacks, media queries) so that it would be maintainable by the whole team.”

Michelle Barker working on wbsl.com

When you first take a look at Grid, it might seem overwhelming with so many different ways of creating a layout. Ultimately, however, it all comes down to things being positioned from one line of the grid to another. You have choices based on the type of layout you are trying to achieve, as well as what works well for your team and the site you are building.

There is no right or wrong way. Below, I will pick up on some of the common themes of confusion. I’ve also already covered many other potential areas of confusion in a previous article “Grid Gotchas and Stumbling Blocks.”

Should I Use An Implicit Or Explicit Grid?

The grid you define with grid-template-columns and grid-template-rows is known as the Explicit Grid. The Explicit Grid enables the naming of lines on the Grid and also gives you the ability to target the end line of the grid with -1. You’ll choose an Explicit Grid to do either of these things and in general when you have a layout all designed and know exactly where your grid lines should go and the size of the tracks.

I use the Implicit Grid most often for row tracks. I want to define the columns but then rows will just be auto-sized and grow to contain the content. You can control the Implicit Grid to some extent with grid-auto-columns and grid-auto-rows, however, you have less control than if you are defining everything.

You need to decide whether you know exactly how much content you have and therefore the number of rows and columns — in which case you can create an Explicit Grid. If you do not know how much content you have, but simply want rows or columns created to hold whatever there is, you will use the Implicit Grid.

Nevertheless, it’s possible to combine the two. In the below CSS, I have defined three columns in the Explicit Grid and three rows, so the first three rows of content will be the following:

  • A track of at least 200px in height, but expanding to take content taller,
  • A track fixed at 400px in height,
  • A track of at least 300px in height (but expands).

Any further content will go into a row created in the Implicit Grid, and I am using the grid-auto-rows property to make those tracks at least 300px tall, expanding to auto.

.grid {
  display: grid;
  grid-template-columns: 1fr 3fr 1fr;
  grid-template-rows: minmax(200px, auto) 400px minmax(300px, auto);
  grid-auto-rows: minmax(300px, auto);
  grid-gap: 20px;
}

A Flexible Grid With A Flexible Number Of Columns

By using Repeat Notation, auto-fill, and minmax() you can create a pattern of as many tracks as will fit into a container, thus removing the need for Media Queries to some extent. This technique can be found in this video tutorial, and is also demonstrated along with similar ideas in my recent article “Using Media Queries For Responsive Design In 2018.”

Choose this technique when you are happy for content to drop below earlier content when there is less space, and are happy to allow a lot of flexibility in sizing. You have specifically asked for your columns to display with a minimum size, and to auto fill.
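As a rough sketch of this pattern (the class name and the 200px minimum here are just example values, not taken from any specific design):

.grid {
  display: grid;
  /* create as many columns of at least 200px as will fit,
     and let them share any leftover space equally */
  grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
  grid-gap: 20px;
}

Swapping auto-fill for auto-fit collapses any empty tracks, which changes how leftover space is distributed when there are fewer items than columns.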

There were a few comments in the survey that made me wonder if people were choosing this method when they really wanted a grid with a fixed number of columns. If you are ending up with an unpredictable number of columns at certain breakpoints, you might be better to set the number of columns — and redefine it with media queries as needed — rather than using auto-fill or auto-fit.

Which Method Of Track Sizing Should I Use?

I described track sizing in detail in my article “How Big Is That Box? Understanding Sizing In Grid Layout,” however, I often get questions as to which method of track sizing to use. Particularly, I get asked about the difference between percentage sizing and the fr unit.

If you simply use the fr unit as specced, then it differs from using a percentage because it distributes available space. If you place a larger item into a track, the fr unit will allow that track to take up more space and distribute what is left over.

.grid {
  display: grid;
  grid-template-columns: 1fr 1fr 1fr;
  grid-gap: 20px;
}

The first column is wider as Grid has assigned it more space.

To cause the fr unit to distribute all of the space in the grid container you need to give it a minimum size of 0 using minmax().

.grid {
  display: grid;
  grid-template-columns: minmax(0, 1fr) minmax(0, 1fr) minmax(0, 1fr);
  grid-gap: 20px;
}

Forcing a 0 minimum may cause overflows.

So you can choose to use fr in either of these scenarios: ones where you do want space distribution from a basis of auto (the default behavior), and those where you want equal distribution. I would typically use the fr unit as it then works out the sizing for you, and enables the use of fixed width tracks or gaps. The only time I use a percentage instead is when I am adding grid components to an existing layout that uses other layout methods too. If I want my grid components to line up with a float- or flex-based layout which is using percentages, using them in my grid layout means everything uses the same sizing method.

Auto-Place Items Or Set Their Position?

You will often find that you only need to place one or two items in your layout, and the rest fall into place based on content order. In fact, this is a really good test that you haven’t disconnected the source and visual display. If things pretty much drop into position based on auto-placement, then they are probably in a good order.

Once I have decided where everything goes, however, I do tend to assign a position to everything. This means that I don’t end up with strange things happening if someone adds something to the document and grid auto-places it somewhere unexpected, thus throwing out the layout. If everything is placed, Grid will put that item into the next available empty grid cell. That might not be exactly where you want it, but sat down at the end of your layout is probably better than popping into the middle and pushing other things around.

Which Positioning Method To Use?

When working with Grid Layout, ultimately everything comes down to placing items from one line to another. Everything else is essentially a helper for that.

Decide with your team if you want to name lines, use Grid Template Areas, or if you are going to use a combination of different types of layout. I find that I like to use Grid Template Areas for small components in particular. However, there is no right or wrong. Work out what is best for you.
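For illustration only, here is a minimal Grid Template Areas sketch for a small component; the selectors, area names, and track sizes are invented for this example:

.media {
  display: grid;
  grid-template-columns: 100px 1fr;
  grid-gap: 10px;
  /* name the areas, then place each child by name */
  grid-template-areas:
    "image heading"
    "image body";
}

.media .image   { grid-area: image; }
.media .heading { grid-area: heading; }
.media .body    { grid-area: body; }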

Grid In Combination With Other Layout Mechanisms

Remember that Grid Layout isn’t the one true layout method to rule them all, it’s designed for a certain type of layout — namely two-dimensional layout. Other layout methods still exist and you should consider each pattern and what suits it best.

I think this is actually quite hard for those of us used to hacking around with layout methods to make them do something they were not really designed for. It is a really good time to take a step back, look at the layout methods for the tasks they were designed for, and remember to use them for those tasks.

In particular, no matter how often I write about Grid versus Flexbox, I will be asked which one people should use. There are many patterns where either layout method makes perfect sense and it really is up to you. No-one is going to shout at you for selecting Flexbox over Grid, or Grid over Flexbox.

In my own work, I tend to use Flexbox for components where I want the natural size of items to strongly control their layout, essentially pushing the other items around. I also often use Flexbox because I want alignment, given that the Box Alignment properties are only available to use in Flexbox and Grid. I might have a Flex container with one child item, in order that I can align that child.

A sign that perhaps Flexbox isn’t the layout method I should choose is when I start adding percentage widths to flex items and setting flex-grow to 0. The reason to add percentage widths to flex items is often because I’m trying to line them up in two dimensions (lining things up in two dimensions is exactly what Grid is for). However, try both, and see which seems to suit the content or design pattern best. You are unlikely to be causing any problems by doing so.

Nesting Grid And Flex Items

This also comes up a lot, and there is absolutely no problem with making a Grid Item also a Grid Container, thus nesting one grid inside another. You can do the same with Flexbox, making a Flex Item and Flex Container. You can also make a Grid Item and Flex Container or a Flex Item a Grid Container — none of these things are a problem!

What we can’t currently do is nest one grid inside another and have the nested grid use the grid tracks defined on the overall parent. This would be very useful and is what the subgrid proposals in Level 2 of the Grid Specification hope to solve. A nested grid currently becomes a new grid so you would need to be careful with sizing to ensure it aligns with any parent tracks.
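A short sketch of what that means in practice (the selectors are hypothetical); the nested grid’s tracks are sized independently of the parent’s:

.parent {
  display: grid;
  grid-template-columns: repeat(3, 1fr);
  grid-gap: 20px;
}

/* a grid item can itself be a grid container,
   but its columns will not automatically line up
   with the parent's tracks */
.parent > .nested {
  display: grid;
  grid-template-columns: 1fr 2fr;
}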

You Can Have Many Grids On One Page

A comment popped up a few times in the survey which surprised me, there seems to be an idea that a grid should be confined to the main layout, and that many grids on one page were perhaps not a good thing. You can have as many grids as you like! Use grid for big things and small things, if it makes sense laid out as a grid then use Grid.

Fallbacks And Supporting Older Browsers

“Grid used in conjunction with @supports has enabled us to better control the number of layout variations we can expect to see. It has also worked really well with our progressive enhancement approach meaning we can reward those with modern browsers without preventing access to content to those not using the latest technology.”

Joe Lambert working on rareloop.com

In the survey, many people mentioned older browsers, however, there was a reasonably equal split between those who felt that supporting older browsers was hard and those who felt it was easy due to Feature Queries and the fact that Grid overrides other layout methods. I’ve written at length about the mechanics of creating these fallbacks in “Using CSS Grid: Supporting Browsers Without Grid.”

In general, modern browsers are far more interoperable than their earlier counterparts. We tend to see far fewer actual “browser bugs” and if you use HTML and CSS correctly, then you will generally find that what you see in one browser is the same as in another.

We do, of course, have situations in which one browser has not yet shipped support for a certain specification, or some parts of a specification. With Grid, we have been very fortunate in that browsers shipped Grid Layout in a very complete and interoperable way within a short time of each other. Therefore, our considerations for testing tend to be to need to test browsers with Grid and without Grid. You may also have chosen to use the -ms prefixed version in IE10 and IE11, which would then require testing as a third type of browser.

Browsers which support modern Grid Layout (not the IE version) also support Feature Queries. This means that you can test for Grid support before using it.
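A minimal sketch of that approach, assuming a float-based fallback (the selectors and widths are placeholders for whatever your fallback layout uses):

/* fallback for browsers without Grid support */
.item {
  float: left;
  width: 33.333%;
}

@supports (display: grid) {
  .container {
    display: grid;
    grid-template-columns: repeat(3, 1fr);
    grid-gap: 20px;
  }
  /* undo the fallback sizing; floats are ignored on grid items,
     but the width would still apply */
  .item {
    width: auto;
  }
}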

Testing Browsers That Don’t Support Grid

When using fallbacks for browsers without support for Grid Layout (or using the -ms prefixed version for IE10 and 11), you will want to test how those browsers render Grid Layout. To do this, you need a way to view your site in an example browser.

I would not take the approach of breaking your Feature Query by checking for support of something nonsensical, or misspelling the value grid. This approach will only work if your stylesheet is incredibly simple, and you have put absolutely everything to do with your Grid Layout inside the Feature Queries. This is a very fragile and time-consuming way to work, especially if you are extensively using Grid. In addition, an older browser will not just lack support for Grid Layout, there will be other CSS properties unsupported too. If you are looking for “best practice” then setting yourself up so you are in a good position to test your work is high up there!

There are a couple of straightforward ways to set yourself up with a proper method of testing your fallbacks. The easiest method — if you have a reasonably fast internet connection and don’t mind paying a subscription fee — is to use a service such as BrowserStack. This is a service that enables viewing of websites (even those in development on your computer) on a whole host of real browsers. BrowserStack does offer free accounts for open-source projects.

You can download Virtual Machines for testing from Microsoft.

To test locally, my suggestion would be to use a Virtual Machine with your target browser installed. Microsoft offers free Virtual Machine downloads with versions of IE back to IE8, and also Edge. You can also install an older version of a browser with no Grid support at all onto the VM, for example, a copy of Firefox 51 or below. After installing your elderly Firefox, be sure to turn off automatic updates as explained here, as otherwise it will quietly update itself!

You can then test your site in IE11 and in non-supporting Firefox on one VM (a far less fragile solution than misspelling values). Getting set up might take you an hour or so, but you’ll then be in a really good place to test your fallbacks.

Unlearning Old Habits

“It was my first time to use Grid Layout, so there were a lot of concepts to learn and properties understand. Conceptually, I found the most difficult thing to unlearn all the stuff I had done for years, like clearing floats and packing everything in container divs.”

Hidde working on hiddedevries.nl/en

Many of the people responding to the survey mentioned the need to unlearn old habits and how learning Grid Layout would be easier for people completely new to CSS. I tend to agree. When teaching people in person, complete beginners have little problem using Grid, while experienced developers try hard to return Grid to a one-dimensional layout method. I’ve seen attempts at “grid systems” using CSS Grid which add back in the row wrappers needed for a float or flex-based grid.

Don’t be afraid to try out new techniques. If you have the ability to test in a few browsers and remain mindful of potential issues of accessibility, you really can’t go too far wrong. And, if you find a great way to create a certain pattern, let everyone else know about it. We are all new to using Grid in production, so there is certainly plenty to discover and share.

“Grid Layout is the most exciting CSS development since media queries. It's been so well thought through for real-world developer needs and is an absolute joy to use in production - for designers and developers alike.”

Trys Mudford working on trysmudford.com

To wrap up, here is a very short list of current best practices! If you have discovered things that do or don’t work well in your own situation, add them to the comments.

  1. Be very aware of the possibility of content re-ordering. Check that you have not disconnected the visual display from the document order.
  2. Test using real target browsers with a local or remote Virtual Machine.
  3. Don’t forget that older layout methods are still valid and useful. Try different ways to achieve patterns. Don’t be hung up on having to use Grid.
  4. Know that as an experienced front-end developer you are likely to have a whole set of preconceptions about how layout works. Try to look at these new methods anew rather than forcing them back into old patterns.
  5. Keep trying things out. We’re all new to this. Test your work and share what you discover.
(il)
Categories: Web Design

Monthly Web Development Update 4/2018: On Effort, Bias, And Being Productive

Smashing Magazine - Fri, 04/13/2018 - 05:30
Monthly Web Development Update 4/2018: On Effort, Bias, And Being Productive, by Anselm Hannemann (2018-04-13)

These days, it is one of the biggest challenges to think long-term. In a world where we live with devices that last only a few months or a few years maybe, where we buy stuff to throw it away only days or weeks later, the term ‘effort’ gains a new meaning.

Recently, I was reading an essay on ‘Yatnah’, ‘Effort’. I spent a lot of time outside in nature in the past weeks and created a small acre to grow some vegetables. I also attended a workshop to learn the craft of grafting fruit trees. When you cut a tree, you realize that our fast-living, short-term lifestyle is very different from how nature works. I grafted a tree that is supposed to grow for decades now, and if you cut a tree that has been there for forty years, it’ll take another forty to grow one that will be similarly tall.

I’d love that we all try to create more long-lasting work, software that works in a decade, and in order to do so, put effort into learning how we can make that happen. So long, I’ll leave you with this quote and a bunch of interesting articles.

“In our modern world it can be tempting to throw effort away and replace it with a few phrases of positive thinking. But there is just no substitute for practice”.

— Kino Macgregor

News
  • The Safari Technology Preview 52 removes support for all NPAPI plug-ins other than Adobe Flash and adds support for preconnect link headers.
  • Chrome 66 Beta brings the CSS Typed Object Model, Async Clipboard API, AudioWorklets, and support for calc(), min(), and max() in Media Queries. Additionally, select and textarea fields now support the autocomplete attribute, and the catch clause of a try statement can be used without a parameter from now on (see the snippet after this list).
  • iOS 11.3 is available to the public now, and, as already announced, the release brings support for Progressive Web Apps to iOS. Maximiliano Firtman shares what this means, what works and what doesn’t work (yet).
  • Safari 11.1 is now available for everyone. Here is a summary of all the new WebKit features it includes.
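As a quick illustration of that last Chrome 66 point, here is a minimal sketch (not taken from the release notes; the function and data are made up for illustration) of the parameter-less catch clause:

// Optional catch binding: the "(error)" parameter can now be omitted
// when you don't need the error object itself.
function parseSettings(json) {
  try {
    return JSON.parse(json);
  } catch {
    // No error variable required anymore
    return {};
  }
}

console.log(parseSettings('{"theme":"dark"}')); // { theme: "dark" }
console.log(parseSettings('not json'));         // {}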
Progressive Web Apps for iOS are here. Full screen, offline capable, and even visible in the iPad’s dock. (Image credit)
General
  • Anil Dash reflects on what the web was intended to be and how today’s web differs from this: “At a time when millions are losing trust in the web’s biggest sites, it’s worth revisiting the idea that the web was supposed to be made out of countless little sites. Here’s a look at the neglected technologies that were supposed to make it possible.”
  • Morten Rand-Hendriksen wrote about using ethics in web design and what questions we should ask ourselves when suggesting a solution, creating a design, or building a new feature. Especially when we think we’re making something ‘smart’, it’s important to first ask whether it actually helps people.
  • A lot of protest and discussion came along with the Facebook / Cambridge Analytica affair, most of it pointing out the technological problems with Facebook’s permission model. But the crux lies in how Facebook designed their company and which ethical baseline they set. If we don’t want something like this to happen again, it’s up to us to design the services we want.
  • Brendan Dawes shares why he thinks URLs are a masterpiece and a user experience by themselves.
  • Charlie Owen’s talk transcription of “Dear Developer, The Web Isn’t About You” is a good summary of why we as developers need to think beyond what’s good for us and consider what serves the users and how we can achieve that instead.
UI/UX
  • B. Kaan Kavuştuk shares his thoughts about why we won’t be able to build a perfect design or codebase on the first try, no matter how much experience we have. Instead, it’s the constant small improvements that pave the way to perfection.
  • Trine Falbe introduces us to Ethical Design with a practical getting-started guide. It shows alternatives and things to think about when building a business or product. It doesn’t matter much if you’re the owner, a developer, a designer or a sales person, this is about serving users and setting the ground for real and sustainable trust.
  • Josh Lovejoy shares his learnings from working on inclusive tech solutions and why it takes more than good intentions to create fair, inclusive technology. This article goes into depth on why human judgment is difficult and often based on bias, and why it therefore isn’t easy to design and develop algorithms that treat different people equally.
  • The HSB (Hue, Saturation, Brightness) color system isn’t especially new, but a lot of people still don’t understand its advantages. Erik D. Kennedy explains its principles and advantages step-by-step.
  • While there’s more discussion about inclusive design these days, it’s often seen under the accessibility hat or as technical decisions. Robert del Prado now shares how important inclusive design thinking is and why it’s much more about the generic user than some specific people with specific disabilities. Inclusive design brings people together, regardless of who they are, where they live, and what they can afford. And isn’t it the goal of every product to be successful by acquiring as many people as possible? Maybe we need to discuss this with marketing people as well.
  • Anton Lovchikov shares ways to improve optical adjustments in components. It’s an interesting study on how very small changes can make quite a difference.
Afraid or angry? Which emotion we think the baby is showing depends on whether we think it’s a girl or a boy. Josh Lovejoy explains how personal bias and judgments like this one lead to unfair products. (Image credit)
Tooling
  • Brian Schrader found an unknown feature in Git which is very helpful to test ideas quickly: Git Notes lets us add, remove, or read notes attached to objects, without touching the objects themselves and without needing to commit the current state.
  • For many projects, I prefer to use npm scripts over calling gulp or direct webpack tasks. Michael Kühnel shares some useful tricks for npm scripts, including how to allow CLI option parameters or how to watch tasks and alert notices on error.
  • Anton Sten explains why new tools don’t always equal productivity. We all love new design tools, and new ones such as Sketch, Figma, Xd, or Invision Studio keep popping up. But despite these tools solving a lot of common problems and making some things easier, productivity is mostly about what works for your problem and not what is newest. If you need to create a static mockup and Photoshop is what you know best, why not use it?
  • There’s a new, fast DNS service available from Cloudflare. Finally a better alternative to the much-used Google DNS servers, it is available under 1.1.1.1. The new DNS service is the fastest and probably also one of the most secure ones out there. Cloudflare put a lot of effort into encrypting the service and partnering with Mozilla to make DNS over HTTPS work, closing a big privacy gap that until now leaked all your browsing data to the DNS provider.
  • I heard a lot about iOS machine learning already, but despite the interesting fact that they’re able to do this on the device without sending everything to a cloud, I haven’t found out how to make use of this for apps, yet. Luckily, Manu Rink put together a nice guide in which she explains machine learning in iOS for beginners.
  • There’s great news for the Git GUI fans: Tower now offers a new beta version that includes pull request support, interactive rebase workflows, quick actions, reflog, and search. An amazing update that makes working with the software much faster than before, and even for me as a command line lover it’s a nice option.
Manu Rink shows how machine learning in iOS works by building an offline handwritten text recognition. (Image credit)
CSS
  • Amber Wilson shares some insights into what it feels like to be thrown into a complex project in order to do the styling there. She rightly says that “nobody said CSS is easy” and expresses how important it is that we as developers face inconvenient situations in order to grow our knowledge.
  • Ana Tudor is known for her special CSS skills. Now she explores and describes how we can achieve scooped corners in CSS with some clever tricks.
Scooped corners? Ana Tudor shows how to do it. (Image credit)
JavaScript
  • WebKit got an upgrade for the Clipboard API, and the team gives some very interesting insights into how it works and how Safari will handle some of the common challenges with clipboard data (e.g. images).
  • If you work with key-value stores that live only in the frontend, IDB-Keyval is a great lightweight library that simplifies working with IndexedDB and localStorage (a short usage sketch follows this list).
  • Ever wanted to create graphics from your data with a hand-drawn, sketchy look on a website? Rough.js lets you do just that. It’s usually Canvas-based (for better performance and less data) but can also draw SVG paths.
  • If you need a drag-and-drop reorder module, there’s a smooth and accessible solution available now: dragon-drop.
  • For many years, we could only get CSS values in their computed value and even that wasn’t flexible or nice to work with. But now CSS has a proper object-based API for working with values in JavaScript: the CSS Typed Object Model. It’s only available in the upcoming Chrome 66 yet but definitely a promising feature I’d love to use in my code soon.
  • The React.js documentation now has an extra section that explains how to easily and programmatically manage focus states to ensure your UI is accessible.
  • James Milner shares how we can use abortable fetch to cancel requests (see the second sketch after this list).
  • There are a few articles about Web Push Notifications out there already, but Oleksii Rudenko’s getting-started guide is a great primer that explains the principles very well.
  • In the past years, we got a lot of new features on the JavaScript platform. And since it’s hard to remember all the new stuff, Raja Rao DV summed up “Everything new in ECMAScript 2016, 2017, and 2018”.
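As referenced in the IDB-Keyval item above, here is a minimal usage sketch of the library’s promise-based get/set API; the key name and stored object are placeholders chosen for illustration:

import { get, set } from 'idb-keyval';

// Store a value under a key (persisted in IndexedDB)
set('draft-article', { title: 'Hello', body: 'Lorem ipsum' })
  .then(() => console.log('Draft saved'))
  .catch((err) => console.error('Saving failed', err));

// Read it back later, e.g. on the next page load
get('draft-article').then((draft) => {
  if (draft) {
    console.log('Restored draft:', draft.title);
  }
});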
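And here is the abortable fetch pattern mentioned above, as a minimal sketch using the standard AbortController API; the endpoint URL is hypothetical:

const controller = new AbortController();

// Pass the controller's signal to fetch so the request can be cancelled later
fetch('/api/search?q=smashing', { signal: controller.signal })
  .then((response) => response.json())
  .then((data) => console.log(data))
  .catch((error) => {
    // An aborted request rejects with an AbortError
    if (error.name === 'AbortError') {
      console.log('Request was cancelled');
    } else {
      console.error('Request failed', error);
    }
  });

// Cancel the request, e.g. when the user navigates away
controller.abort();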
Work & Life
  • To raise awareness for how common such situations are for all of us, James Bennett shares an embarrassing situation where he made a simple mistake that took him a long time to find out. It’s not just me making mistakes, it’s not just you, and not just James — all of us make mistakes, and as embarrassing as they seem to be in that particular situation, there’s nothing to feel bad about.
  • Adam Blanchard says “People are machines. We need maintenance, too.” and creates a comparison for engineers to understand why we need to take care of ourselves and also why we need people who take care of us. This is an insight into what People Engineers do, and why it’s so important for companies to hire such people to ensure a team is healthy.
  • If there’s one thing we don’t talk much about in the web industry, it’s retirement. Jan Chipchase has now written up a lot of interesting thoughts about it.
  • Rebecca Downes shares some insights into her PhD on remote teams, revealing under which circumstances remote teams are great and under which they’re not.
People need maintenance, too. That’s where the People Engineer comes in. (Image credit)

We hope you enjoyed this Web Development Update. The next one is scheduled for Friday, May 18th. Stay tuned.

(cm)
Categories: Web Design

Getting Started With the Mojs Animation Library: The Shape Module

Tuts+ Code - Web Development - Fri, 04/13/2018 - 05:00

In the previous tutorial, we used mojs to animate different HTML elements on a webpage. We used the library mainly to animate div elements that looked like squares or circles. However, you can use the Html module to animate all kinds of elements, like images or headings. If you actually intend to animate basic shapes using mojs, though, you should probably use the Shape module from the library.

The Shape module allows you to create basic shapes in the DOM using SVG. All you have to do is specify the type of shape that you want to create, and mojs will take care of the rest. This module also allows you to animate different shapes that you create. 

In this tutorial, we will cover the basics of the Shape module and how you can use it to create different shapes and animate them.

Creating Shapes in Mojs

You need to instantiate a mojs Shape object in order to create different shapes. This object will accept different parameters which can be used to control the color, size, angle, etc. of the shapes that you create.

By default, any shape that you create will use the document body as its parent. You can specify any other element as its parent using the parent property. You can also assign a class to any shape that you create with the help of the className property. The library will not assign any default class if you skip this property.

Mojs has eight different built-in shapes that you can create directly by setting a value for the shape property. You can set its value to circle to create circles, rect to create rectangles, and polygon to create polygons. You can also draw a straight line by setting the value of shape to line. The library will draw two perpendicular lines if the shape value is cross, and a number of parallel lines if the shape is equal. Similarly, zigzag lines can be created by setting the property value to zigzag.

The shape object also has a points property which has different meanings for different shapes. It determines the total number of sides in a polygon and the total number of parallel lines in an equal shape. The points property can also be used to set the number of bends in a zigzag line.

As I mentioned earlier, mojs creates all these shapes using SVG. This means that the Shape object will also have some SVG specific properties to control the appearance of these shapes. You can set the fill color of a mojs shape using the fill property. When no color is specified, the library will use the deeppink color to fill the shape. Similarly, you can specify the stroke color for a shape using the stroke property. When no stroke color is specified, mojs keeps the stroke transparent. You can control the fill and stroke opacity for a shape using the fillOpacity and strokeOpacity properties. They can have any value between 0 and 1.

Mojs allows you to control other stroke-related properties of a shape as well. For instance, you can specify the pattern of dashes and gaps in a stroke path using the strokeDasharray property. This property accepts both strings and numbers as valid values. Its default value is zero, which means that the stroke will be a solid line. The width of a stroke can be specified using the strokeWidth property. All the strokes will be 2px wide by default. The shape of different lines at their end points can be specified using the strokeLinecap property. Valid values for strokeLinecap are butt, round, and square.

Any shape that you create is placed at the center of its parent element by default. This is because the left and top properties for a shape are set to 50% each. You can change the values of these properties to place the elements in different locations. Another way to control the position of a shape is with the help of the x and y properties. They determine how much a shape should be shifted in the horizontal and vertical direction respectively.

You can specify the radius of a shape using the radius property. This value is used to determine the size of a particular shape. You can also use radiusX and radiusY to specify the size of a shape in a particular direction. Another way of controlling the size of a shape is with the help of the scale property. The default value of scale is 1, but you can set it to any other number you like. You can also scale a shape in a particular direction using the scaleX and scaleY properties.

The origin of all these transformations of a shape is its center by default. For example, if you rotate any shape by specifying a value for the angle property, the shape will rotate around its center. If you want to rotate a shape around some other point, you can specify it using the origin property. This property accepts a string as its value. Setting it to '0% 0%' will rotate, scale or translate the shape around its top left corner. Similarly, setting it to '50% 0%' will rotate, scale or translate the shape around the center of its top edge.
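As a quick illustration of angle and origin working together, here is a minimal sketch (not part of the tutorial’s own demos; the property values are arbitrary) of a rectangle tilted around its top-left corner instead of its center:

var tiltedRect = new mojs.Shape({
  parent:      '.container',
  shape:       'rect',
  fill:        'deeppink',
  radius:      40,
  // Rotate 45 degrees around the top-left corner instead of the default center
  angle:       45,
  origin:      '0% 0%',
  isShowStart: true
});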

You can use all these properties we just discussed to create a large variety of shapes. Here are a few examples:

var circleA = new mojs.Shape({ parent: ".container", shape: "circle", left: 0, fill: "orange", stroke: "black", strokeWidth: 5, isShowStart: true });

var circleB = new mojs.Shape({ parent: ".container", shape: "circle", left: 175, fill: "orange", stroke: "red", radiusX: 80, strokeWidth: 25, strokeDasharray: 2, isShowStart: true });

var rectA = new mojs.Shape({ parent: ".container", shape: "rect", left: 350, fill: "red", stroke: "black", strokeWidth: 5, isShowStart: true });

var rectB = new mojs.Shape({ parent: ".container", shape: "rect", left: 500, fill: "blue", stroke: "blue", radiusX: 40, radiusY: 100, strokeWidth: 25, strokeDasharray: 20, isShowStart: true });

var polyA = new mojs.Shape({ parent: ".container", shape: "polygon", top: 300, left: 0, fill: "yellow", stroke: "black", strokeWidth: 10, points: 8, isShowStart: true });

var polyB = new mojs.Shape({ parent: ".container", shape: "polygon", top: 350, left: 175, radiusX: 100, radiusY: 100, fill: "violet", stroke: "black", strokeWidth: 6, strokeDasharray: "15, 10, 5, 10", strokeLinecap: "round", points: 3, isShowStart: true });

var lineA = new mojs.Shape({ parent: ".container", shape: "cross", top: 350, left: 375, stroke: "red", strokeWidth: 40, isShowStart: true });

var zigzagA = new mojs.Shape({ parent: ".container", shape: "zigzag", top: 500, left: 50, fill: "transparent", stroke: "black", strokeWidth: 4, radiusX: 100, points: 10, isShowStart: true });

var zigzagB = new mojs.Shape({ parent: ".container", shape: "zigzag", top: 500, left: 350, fill: "blue", stroke: "transparent", radiusX: 100, points: 50, isShowStart: true });

The shapes created by the above code are shown in the CodePen demo below:

Animating Shapes in Mojs

You can animate almost all the properties of a shape that we discussed in the previous section. For instance, you can animate the number of points in a polygon by specifying different initial and final values. You can also animate the origin of a shape from '50% 50%' to any other value like '75% 75%'. Other properties like angle and scale behave just like they did in the Html module. 

The duration, delay and speed of different animations can be controlled using the duration, delay and speed properties respectively. The repeat property also works like it did in the Html module. You can set it to 0 if you want to play the animation only once. Similarly, you can set it to 3 to play the animation 4 times. All the easing values that were valid for the Html module are also valid for the Shape module.

The only difference between the animation capabilities of these two modules is that you cannot specify the animation parameters individually for the properties in the Shape module. All the properties that you are animating will have the same duration, delay, repetitions, etc.

Here is an example where we animate the x position, scale and angle of a circle:

var circleA = new mojs.Shape({
  parent: ".container",
  shape: "circle",
  left: 175,
  fill: "red",
  stroke: "black",
  strokeWidth: 100,
  strokeDasharray: 18,
  isShowStart: true,
  x: { 0: 300 },
  angle: { 0: 360 },
  scale: { 0.5: 1.5 },
  duration: 1000,
  repeat: 10,
  isYoyo: true,
  easing: "quad.in"
});

One way to control the playback of different animations is by using the .then() method to specify a new set of properties to be animated after the first animation sequence has fully completed. You can give all animation properties new initial and final values inside .then(). Here is an example:

var polyA = new mojs.Shape({
  parent: ".container",
  shape: "polygon",
  fill: "red",
  stroke: "black",
  isShowStart: true,
  points: 4,
  left: 100,
  x: { 0: 500 },
  strokeWidth: { 5: 2 },
  duration: 2000,
  easing: 'elastic.in'
}).then({
  strokeWidth: 5,
  points: { 4: 3 },
  angle: { 0: 720 },
  scale: { 1: 2 },
  fill: { 'red': 'yellow' },
  duration: 1000,
  delay: 200,
  easing: 'elastic.out'
});

Final Thoughts

In this tutorial, we learned how to create different shapes using mojs and how to animate the properties of these shapes. 

The Shape module has all the animation capabilities of the Html module. The only difference is that the properties cannot be animated individually. They can only be animated as a group. You can also control the animation playback by using different methods to play, pause, stop and resume the animations at any point. I covered these methods in detail in the first tutorial of the series.
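For instance, a minimal playback sketch might look like the following; this assumes the play and pause methods covered in the first tutorial of this series, and the shape and timings are arbitrary:

var spinner = new mojs.Shape({
  shape:    'circle',
  stroke:   'deeppink',
  fill:     'transparent',
  angle:    { 0: 360 },
  duration: 2000,
  repeat:   999
});

// Start the animation manually
spinner.play();

// Pause it again after three seconds
setTimeout(function () {
  spinner.pause();
}, 3000);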

If you have any questions related to this tutorial, feel free to post a comment. In the next tutorial, you will learn about the ShapeSwirl and stagger modules in mojs.

Categories: Web Design

Automating Your Feature Testing With Selenium WebDriver

Smashing Magazine - Thu, 04/12/2018 - 03:45
Automating Your Feature Testing With Selenium WebDriver Automating Your Feature Testing With Selenium WebDriver Nils Schütte 2018-04-12T12:45:54+02:00 2018-04-19T14:33:52+00:00

This article is for web developers who wish to spend less time testing the front end of their web applications but still want to be confident that every feature works fine. It will save you time by automating repetitive online tasks with Selenium WebDriver. You will find a step-by-step example for automating and testing the login function of WordPress, but you can also adapt the example for any other login form.

What Is Selenium And How Can It Help You?

Selenium is a framework for the automated testing of web applications. Using Selenium, you can basically automate every task in your browser as if a real person were to execute the task. The interface used to send commands to the different browsers is called Selenium WebDriver. Implementations of this interface are available for every major browser, including Mozilla Firefox, Google Chrome and Internet Explorer.

Which type of web developer are you? Are you the disciplined type who tests all key features of your web application after each deployment? If so, you are probably annoyed by how much time this repetitive testing consumes. Or are you the type who just doesn’t bother with testing key features and always thinks, “I should test more, but I’d rather develop new stuff”? If so, you probably only find bugs by chance or when your client or boss complains about them.

I have been working for a well-known online retailer in Germany for quite a while, and I always belonged to the second category: It was so exciting to think of new features for the online shop, and I didn’t like at all going over all of the previous features again after each new software deployment. So, the strategy was more or less to hope that all key features would work.


One day, we had a serious drop in our conversion rate and started digging in our web analytics tools to find the source of this drop. It took quite a while before we found out that our checkout did not work properly since the previous software deployment.

This was the day when I started to do some research about automating our testing process of web applications, and I stumbled upon Selenium and its WebDriver. Selenium is basically a framework that allows you to automate web browsers. WebDriver is the name of the key interface that allows you to send commands to all major browsers (mobile and desktop) and work with them as a real user would.

Preparing The First Test With Selenium WebDriver

First, I was a little skeptical of whether Selenium would suit my needs because the framework is most commonly used in Java, and I am certainly not a Java expert. Later, I learned that being a Java expert is not necessary to take advantage of the power of the Selenium framework.

As a simple first test, I tested the login of one of my WordPress projects. Why WordPress? Just because using the WordPress login form is an example that everybody can follow more easily than if I were to refer to some custom web application.

What do you need to start using Selenium WebDriver? Because I decided to use the most common implementation of Selenium in Java, I needed to set up my little Java environment.

If you want to follow my example, you can use the Java environment of your choice. If you haven’t set one up yet, I suggest installing Eclipse and making sure you are able to run a simple “Hello world” script in Java.

Because I wanted to test the login in Chrome, I made sure that the Chrome browser was already installed on my machine. That’s all I did in preparation.

Downloading The ChromeDriver

All major browsers provide their own implementation of the WebDriver interface. Because I wanted to test the WordPress login in Chrome, I needed to get the WebDriver implementation of Chrome: ChromeDriver.

I extracted the ZIP archive and stored the executable file chromedriver.exe in a location that I could remember for later.

Setting Up Our Selenium Project In Eclipse

The steps I took in Eclipse are probably pretty basic to someone who works a lot with Java and Eclipse. But for those like me, who are not so familiar with this, I will go over the individual steps:

  1. Open Eclipse.
  2. Click the "New" icon.
    Creating a new project in Eclipse
  3. Choose the wizard to create a new "Java Project," and click “Next.”
    Choose the java-project wizard.
  4. Give your project a name, and click "Finish."
    The eclipse project wizard
  5. Now you should see your new Java project on the left side of the screen.
    We successfully created a project to run the Selenium WebDriver.
Adding The Selenium Library To Our Project

Now we have our Java project, but Selenium is still missing. So, next, we need to bring the Selenium framework into our Java project. Here are the steps I took:

  1. Download the latest version of the Java Selenium library.
    Download the Selenium library.
  2. Extract the archive, and store the folder in a place you can remember easily.
  3. Go back to Eclipse, and go to "Project" → “Properties.”
    Go to properties to integrate the Selenium WebDriver in you project.
  4. In the dialog, go to “Java Build Path” and then to the “Libraries” tab.
  5. Click on "Add External JARs."
    Add the Selenium lib to your Java build path.
  6. Navigate to the folder with the Selenium library that you just downloaded. Highlight all .jar files and click "Open."
    Select all files of the lib to add to your project.
  7. Repeat this for all .jar files in the subfolder libs as well.
  8. Eventually, you should see all .jar files in the libraries of your project:
    The Selenium WebDriver framework has now been successfully integrated into your project!

That’s it! Everything we’ve done until now is a one-time task. You could use this project now for all of your different tests, and you wouldn’t need to do the whole setup process for every test case again. Kind of neat, isn’t it?

Creating Our Testing Class And Letting It Open the Chrome Browser

Now we have our Selenium project, but what next? To see whether it works at all, I wanted to try something really simple, like just opening my Chrome browser.

To do this, I needed to create a new Java class from which I could execute my first test case. Into this executable class, I copied a few Java code lines, and believe it or not, it worked! Magically, the Chrome browser opened and, after a few seconds, closed all by itself.

Try it yourself:

  1. Click on the "New" button again (while you are in your new project’s folder).
    Create a new class to run the Selenium WebDriver.
  2. Choose the "Class" wizard, and click “Next.”
    Choose the Java class wizard to create a new class.
  3. Name your class (for example, "RunTest"), and click “Finish.”
    The eclipse Java Class wizard.
  4. Replace all code in your new class with the following code. The only thing you need to change is the path to chromedriver.exe on your computer:
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    /**
     * @author Nils Schuette via frontendtest.org
     */
    public class RunTest {

        static WebDriver webDriver;

        /**
         * @param args
         * @throws InterruptedException
         */
        public static void main(final String[] args) throws InterruptedException {
            // Telling the system where to find the chrome driver
            System.setProperty(
                    "webdriver.chrome.driver",
                    "C:/PATH/TO/chromedriver.exe");

            // Open the Chrome browser
            webDriver = new ChromeDriver();

            // Waiting a bit before closing
            Thread.sleep(7000);

            // Closing the browser and WebDriver
            webDriver.close();
            webDriver.quit();
        }
    }
  5. Save your file, and click on the play button to run your code.
    Running your first Selenium WebDriver project.
  6. If you have done everything correctly, the code should open a new instance of the Chrome browser and close it shortly thereafter.
    The Chrome Browser opens itself magically. (Large preview)
Testing The WordPress Admin Login

Now I was optimistic that I could automate my first little feature test. I wanted the browser to navigate to one of my WordPress projects, log in to the admin area and verify that the login was successful. So, what commands did I need to look up?

  1. Navigate to the login form,
  2. Locate the input fields,
  3. Type the username and password into the input fields,
  4. Hit the login button,
  5. Compare the current page’s headline to see if the login was successful.

Again, after I had done all the necessary updates to my code and clicked on the run button in Eclipse, my browser started to magically work itself through the WordPress login. I successfully ran my first automated website test!

If you want to try this yourself, replace all of the code of your Java class with the following. I will go through the code in detail afterwards. Before executing the code, you must replace four values with your own:

  1. The location of your chromedriver.exe file (as above),

  2. The URL of the WordPress admin account that you want to test,

  3. The WordPress username,

  4. The WordPress password.

Then, save and let it run again. It will open Chrome, navigate to the login of your WordPress website, login and check whether the h1 headline of the current page is “Dashboard.”

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

/**
 * @author Nils Schuette via frontendtest.org
 */
public class RunTest {

    static WebDriver webDriver;

    /**
     * @param args
     * @throws InterruptedException
     */
    public static void main(final String[] args) throws InterruptedException {
        // Telling the system where to find the chrome driver
        System.setProperty(
                "webdriver.chrome.driver",
                "C:/PATH/TO/chromedriver.exe");

        // Open the Chrome browser
        webDriver = new ChromeDriver();

        // Maximize the browser window
        webDriver.manage().window().maximize();

        if (testWordpresslogin()) {
            System.out.println("Test Wordpress Login: Passed");
        } else {
            System.out.println("Test Wordpress Login: Failed");
        }

        // Close the browser and WebDriver
        webDriver.close();
        webDriver.quit();
    }

    private static boolean testWordpresslogin() {
        try {
            // Open the WordPress admin login page
            webDriver.navigate().to("https://www.YOUR-SITE.org/wp-admin/");

            // Type in the username
            webDriver.findElement(By.id("user_login")).sendKeys("YOUR_USERNAME");

            // Type in the password
            webDriver.findElement(By.id("user_pass")).sendKeys("YOUR_PASSWORD");

            // Click the Submit button
            webDriver.findElement(By.id("wp-submit")).click();

            // Wait a little bit (7000 milliseconds)
            Thread.sleep(7000);

            // Check whether the h1 equals "Dashboard"
            if (webDriver.findElement(By.tagName("h1")).getText()
                    .equals("Dashboard")) {
                return true;
            } else {
                return false;
            }

        // If anything goes wrong, return false.
        } catch (final Exception e) {
            System.out.println(e.getClass().toString());
            return false;
        }
    }
}

If you have done everything correctly, your output in the Eclipse console should look something like this:

The Eclipse console states that our first test has passed. (Large preview)
Understanding The Code

Because you are probably a web developer and have at least a basic understanding of other programming languages, I am sure you already grasp the basic idea of the code: We have created a separate method, testWordpresslogin, for the specific test case that is called from our main method.

Depending on whether the method returns true or false, you will get an output in your console telling you whether this specific test passed or failed.

This is not necessary, but this way you can easily add many more test cases to this class and still have readable code.

Now, step by step, here is what happens in our little program:

  1. First, we tell our program where it can find the specific WebDriver for Chrome.
    System.setProperty("webdriver.chrome.driver","C:/PATH/TO/chromedriver.exe");
  2. We open the Chrome browser and maximize the browser window.
    webDriver = new ChromeDriver();
    webDriver.manage().window().maximize();
  3. This is where we jump into our submethod and check whether it returns true or false.
    if (testWordpresslogin()) …
  4. The following part in our submethod might not be intuitive to understand:
    The try{…}catch{…} blocks. If everything goes as expected, only the code in try{…} will be executed, but if anything goes wrong while executing try{…}, then the execution continues in catch{…}. Whenever you try to locate an element with findElement and the browser is not able to locate this element, it will throw an exception and execute the code in catch{…}. In my example, the test will be marked as "failed" whenever something goes wrong and the catch{…} is executed.
  5. In the submethod, we start by navigating to our WordPress admin area and locating the fields for the username and the password by looking for their IDs. Also, we type the given values in these fields.
    webDriver.navigate().to("https://www.YOUR-SITE.org/wp-admin/");
    webDriver.findElement(By.id("user_login")).sendKeys("YOUR_USERNAME");
    webDriver.findElement(By.id("user_pass")).sendKeys("YOUR_PASSWORD");
    Selenium fills out our login form
  6. After filling in the login form, we locate the submit button by its ID and click it.
    webDriver.findElement(By.id("wp-submit")).click();
  7. In order to follow the test visually, I include a 7-second pause here (7000 milliseconds = 7 seconds).
    Thread.sleep(7000);
  8. If the login is successful, the h1 headline of the current page should now be "Dashboard," referring to the WordPress admin area. Because the h1 headline should exist only once on every page, I have used the tag name here to locate the element. In most other cases, the tag name is not a good locator because an HTML tag name is rarely unique on a web page. After locating the h1, we extract the text of the element with getText() and check whether it is equal to the string “Dashboard.” If the login is not successful, we would not find “Dashboard” as the current h1. Therefore, I’ve decided to use the h1 to check whether the login is successful.
    if (webDriver.findElement(By.tagName("h1")).getText().equals("Dashboard")) {
        return true;
    } else {
        return false;
    }
    Letting the WebDriver check, whether we have arrived on the Dashboard: Test passed! (Large preview)
  9. If anything has gone wrong in the previous part of the submethod, the program would have jumped directly to the following part. The catch block will print the type of exception that happened to the console and afterwards return false to the main method.
    catch (final Exception e) {
        System.out.println(e.getClass().toString());
        return false;
    }
Adapting The Test Case

This is where it gets interesting if you want to adapt and add test cases of your own. You can see that we always call methods of the webDriver object to do something with the Chrome browser.

First, we maximize the window:

webDriver.manage().window().maximize();

Then, in a separate method, we navigate to our WordPress admin area:

webDriver.navigate().to("https://www.YOUR-SITE.org/wp-admin/");

There are other methods of the webDriver object we can use. Besides the two above, you will probably use this one a lot:

webDriver.findElement(By. …)

The findElement method helps us find different elements in the DOM. There are different options to find elements:

  • By.id
  • By.cssSelector
  • By.className
  • By.linkText
  • By.name
  • By.xpath

If possible, I recommend using By.id because the ID of an element should always be unique (unlike, for example, the className), and it is usually not affected if the structure of your DOM changes (unlike, say, the XPath).

Note: You can read more about the different options for locating elements with WebDriver over here.

As soon as you get hold of an element using the findElement method, you can call the different methods available on that element. The most common ones are sendKeys, click and getText.

We’re using sendKeys to fill in the login form:

webDriver.findElement(By.id("user_login")).sendKeys("YOUR_USERNAME");

We have used click to submit the login form by clicking on the submit button:

webDriver.findElement(By.id("wp-submit")).click();

And getText has been used to check what text is in the h1 after the submit button is clicked:

webDriver.findElement(By.tagName("h1")).getText()

Note: Be sure to check out all the available methods that you can use with an element.

Conclusion

Ever since I discovered the power of Selenium WebDriver, my life as a web developer has changed. I simply love it. The deeper I dive into the framework, the more possibilities I discover — running one test simultaneously in Chrome, Internet Explorer and Firefox or even on my smartphone, or taking screenshots automatically of different pages and comparing them. Today, I use Selenium WebDriver not only for testing purposes, but also to automate repetitive tasks on the web. Whenever I see an opportunity to automate my work on the web, I simply copy my initial WebDriver project and adapt it to the next task.

If you think that Selenium WebDriver is for you, I recommend looking at Selenium’s documentation to find out about all of the possibilities of Selenium (such as running tasks simultaneously on several (mobile) devices with Selenium Grid).

I look forward to hearing whether you find WebDriver as useful as I do!

(rb, ra, al, il)
Categories: Web Design

Will SiriKit’s Intents Fit Your App? If So, Here’s How To Use Them

Smashing Magazine - Wed, 04/11/2018 - 08:00
Will SiriKit’s Intents Fit Your App? If So, Here’s How To Use Them Will SiriKit’s Intents Fit Your App? If So, Here’s How To Use Them Lou Franco 2018-04-11T17:00:44+02:00 2018-04-19T14:33:52+00:00

Since iOS 5, Siri has helped iPhone users send messages, set reminders and look up restaurants with Apple’s apps. Starting in iOS 10, we have been able to use Siri in some of our own apps as well.

In order to use this functionality, your app must fit within Apple’s predefined Siri “domains and intents.” In this article, we’ll learn about what those are and see whether our apps can use them. We’ll take a simple app that is a to-do list manager and learn how to add Siri support. We’ll also go through the Apple developer website’s guidelines on configuration and Swift code for a new type of extension that was introduced with SiriKit: the Intents extension.

When you get to the coding part of this article, you will need Xcode (at least version 9.x), and it would be good if you are familiar with iOS development in Swift because we’re going to add Siri to a small working app. We’ll go through the steps of setting up an extension on Apple’s developer website and of adding the Siri extension code to the app.

“Hey Siri, Why Do I Need You?”

Sometimes I use my phone while on my couch, with both hands free, and I can give the screen my full attention. Maybe I’ll text my sister to plan our mom’s birthday or reply to a question in Trello. I can see the app. I can tap the screen. I can type.

But I might be walking around my town, listening to a podcast, when a text comes in on my watch. My phone is in my pocket, and I can’t easily answer while walking.


With Siri, I can hold down my headphones’ control button and say, “Text my sister that I’ll be there by two o’clock.” Siri is great when you are on the go and can’t give your full attention to your phone, or when the interaction is minor but would otherwise require several taps and a bunch of typing.

This is fine if I want to use Apple apps for these interactions. But some categories of apps, like messaging, have very popular alternatives. Other activities, such as booking a ride or reserving a table in a restaurant, are not even possible with Apple’s built-in apps but are perfect for Siri.

Apple’s Approach To Voice Assistants

To enable Siri in third-party apps, Apple had to decide on a mechanism to take the sound from the user’s voice and somehow get it to the app in a way that it could fulfill the request. To make this possible, Apple requires the user to mention the app’s name in the request, but they had several options of what to do with the rest of the request.

  • It could have sent a sound file to the app.
    The benefit of this approach is that the app could try to handle literally any request the user might have for it. Amazon or Google might have liked this approach because they already have sophisticated voice-recognition services. But most apps would not be able to handle this very easily.
  • It could have turned the speech into text and sent that.
    Because many apps don’t have sophisticated natural-language implementations, the user would usually have to stick to very particular phrases, and non-English support would be up to the app developer to implement.
  • It could have asked you to provide a list of phrases that you understand.
    This mechanism is closer to what Amazon does with Alexa (in its “skills” framework), and it enables far more uses of Alexa than SiriKit can currently handle. In an Alexa skill, you provide phrases with placeholder variables that Alexa will fill in for you. For example, “Alexa, remind me at $TIME$ to $REMINDER$” — Alexa will run this phrase against what the user has said and tell you the values for TIME and REMINDER. As with the previous mechanism, the developer needs to do all of the translation, and there isn’t a lot of flexibility if the user says something slightly different.
  • It could define a list of requests with parameters and send the app a structured request.
    This is actually what Apple does, and the benefit is that it can support a variety of languages, and it does all of the work to try to understand all of the ways a user might phrase a request. The big downside is that you can only implement handlers for requests that Apple defines. This is great if you have, for example, a messaging app, but if you have a music-streaming service or a podcast player, you have no way to use SiriKit right now.

Similarly, there are three ways for apps to talk back to the user: with sound, with text that gets converted, or by expressing the kind of thing you want to say and letting the system figure out the exact way to express it. The last solution (which is what Apple does) puts the burden of translation on Apple, but it gives you limited ways to use your own words to describe things.

The kinds of requests you can handle are defined in SiriKit’s domains and intents. An intent is a type of request that a user might make, like texting a contact or finding a photo. Each intent has a list of parameters — for example, texting requires a contact and a message.

A domain is just a group of related intents. Reading a text and sending a text are both in the messaging domain. Booking a ride and getting a location are in the ride-booking domain. There are domains for making VoIP calls, starting workouts, searching for photos and a few more things. SiriKit’s documentation contains a full list of domains and their intents.

A common criticism of Siri is that it seems unable to handle requests as well as Google and Alexa, and that the third-party voice ecosystem enabled by Apple’s competitors is richer.

I agree with those criticisms. If your app doesn’t fit within the current intents, then you can’t use SiriKit, and there’s nothing you can do. Even if your app does fit, you can’t control all of the words Siri says or understands; so, if you have a particular way of talking about things in your app, you can’t always teach that to Siri.

The hope of iOS developers is both that Apple will greatly expand its list of intents and that its natural language processing becomes much better. If it does that, then we will have a voice assistant that works without developers having to do translation or understand all of the ways of saying the same thing. And implementing support for structured requests is actually fairly simple to do — a lot easier than building a natural language parser.

Another big benefit of the intents framework is that it is not limited to Siri and voice requests. Even now, the Maps app can generate an intents-based request of your app (for example, a restaurant reservation). It does this programmatically (not from voice or natural language). If Apple allowed apps to discover each other’s exposed intents, we’d have a much better way for apps to work together (as opposed to x-callback-style URLs).

Finally, because an intent is a structured request with parameters, there is a simple way for an app to express that parameters are missing or that it needs help distinguishing between some options. Siri can then ask follow-up questions to resolve the parameters without the app needing to conduct the conversation.

The Ride-Booking Domain

To understand domains and intents, let’s look at the ride-booking domain. This is the domain that you would use to ask Siri to get you a Lyft car.

Apple defines how to ask for a ride and how to get information about it, but there is actually no built-in Apple app that can actually handle this request. This is one of the few domains where a SiriKit-enabled app is required.

You can invoke one of the intents via voice or directly from Maps. Some of the intents for this domain are:

  • Request a ride
    Use this one to book a ride. You’ll need to provide a pick-up and drop-off location, and the app might also need to know your party’s size and what kind of ride you want. A sample phrase might be, “Book me a ride with <appname>.”
  • Get the ride’s status
    Use this intent to find out whether your request was received and to get information about the vehicle and driver, including their location. The Maps app uses this intent to show an updated image of the car as it is approaching you.
  • Cancel a ride
    Use this to cancel a ride that you have booked.

For any of these intents, Siri might need to know more information. As you’ll see when we implement an intent handler, your Intents extension can tell Siri that a required parameter is missing, and Siri will prompt the user for it.

The fact that intents can be invoked programmatically by Maps shows how intents might enable inter-app communication in the future.

Note: You can get a full list of domains and their intents on Apple’s developer website. There is also a sample Apple app with many domains and intents implemented, including ride-booking.

Adding Lists And Notes Domain Support To Your App

OK, now that we understand the basics of SiriKit, let’s look at how you would go about adding Siri support to an app. It involves a lot of configuration and a class for each intent you want to handle.

The rest of this article consists of the detailed steps to add Siri support to an app. There are five high-level things you need to do:

  1. Prepare to add a new extension to the app by creating provisioning profiles with new entitlements for it on Apple’s developer website.
  2. Configure your app (via its plist) to use the entitlements.
  3. Use Xcode’s template to get started with some sample code.
  4. Add the code to support your Siri intent.
  5. Configure Siri’s vocabulary via plists.

Don’t worry: We’ll go through each of these, explaining extensions and entitlements along the way.

To focus on just the Siri parts, I’ve prepared a simple to-do list manager, List-o-Mat.

Making lists in List-o-Mat (Large preview)

You can find the full source of the sample, List-o-Mat, on GitHub.

To create it, all I did was start with the Xcode Master-Detail app template and make both screens into a UITableView. I added a way to add and delete lists and items, and a way to check off items as done. All of the navigation is generated by the template.

To store the data, I used the Codable protocol (introduced at WWDC 2017), which turns structs into JSON and saves it in a text file in the documents folder.

I’ve deliberately kept the code very simple. If you have any experience with Swift and making view controllers, then you should have no problem with it.

Now we can go through the steps of adding SiriKit support. The high-level steps would be the same for any app and whichever domain and intents you plan to implement. We’ll mostly be dealing with Apple’s developer website, editing plists and writing a bit of Swift.

For List-o-Mat, we’ll focus on the lists and notes domain, which is broadly applicable to things like note-taking apps and to-do lists.

In the lists and notes domain, we have the following intents that would make sense for our app.

  • Get a list of tasks.
  • Add a new task to a list.

Because the interactions with Siri actually happen outside of your app (maybe even when your app is not running), iOS uses an extension to implement this.

The Intents Extension

If you have not worked with extensions, you’ll need to know three main things:

  1. An extension is a separate process. It is delivered inside of your app’s bundle, but it runs completely on its own, with its own sandbox.
  2. Your app and extension can communicate with each other by being in the same app group. The easiest way is via the group’s shared sandbox folders (so, they can read and write to the same files if you put them there).
  3. Extensions require their own app IDs, profiles and entitlements.

To add an extension to your app, start by logging into your developer account and going to the “Certificates, Identifiers, & Profiles” section.

Updating Your Apple Developer App Account Data

In our Apple developer account, the first thing we need to do is create an app group. Go to the “App Groups” section under “Identifiers” and add one.

Registering an app group (Large preview)

It must start with “group.”, followed by your usual reverse-domain-based identifier. Because it has a prefix, you can use your app’s identifier for the rest.

Then, we need to update our app’s ID to use this group and to enable Siri:

  1. Go to the “App IDs” section and click on your app’s ID;
  2. Click the “Edit” button;
  3. Enable app groups (if not enabled for another extension).
    Enable app groups (Large preview)
  4. Then configure the app group by clicking the “Edit” button. Choose the app group from before.
    Set the name of the app group (Large preview)
  5. Enable SiriKit.
    Enable SiriKit (Large preview)
  6. Click “Done” to save it.

Now, we need to create a new app ID for our extension:

  1. In the same “App IDs” section, add a new app ID. This will be your app’s identifier, with a suffix. Do not use just Intents as a suffix because this name will become your module’s name in Swift and would then conflict with the real Intents.
    Create an app ID for the Intents extension (Large preview)
  2. Enable this app ID for app groups as well (and set up the group as we did before).

Now, create a development provisioning profile for the Intents extension, and regenerate your app’s provisioning profile. Download and install them as you would normally do.

Now that our profiles are installed, we need to go to Xcode and update the app’s entitlements.

Updating Your App’s Entitlements In Xcode

Back in Xcode, choose your project’s name in the project navigator. Then, choose your app’s main target, and go to the “Capabilities” tab. In there, you will see a switch to turn on Siri support.

Enable SiriKit in your app's entitlements. (Large preview)

Further down the list, you can turn on app groups and configure it.

Configure the app's app group (Large preview)

If you have set it up correctly, you’ll see this in your app’s .entitlements file:

The plist shows the entitlements that you set (Large preview)

Now, we are finally ready to add the Intents extension target to our project.

Adding The Intents Extension

We’re finally ready to add the extension. In Xcode, choose “File” → “New Target.” This sheet will pop up:

Add the Intents extension to your project (Large preview)

Choose “Intents Extension” and click the “Next” button. Fill out the following screen:

Configure the Intents extension (Large preview)

The product name needs to match whatever you made the suffix in the intents app ID on the Apple developer website.

We are choosing not to add an intents UI extension. This isn’t covered in this article, but you could add it later if you need one. Basically, it’s a way to put your own branding and display style into Siri’s visual results.

When you are done, Xcode will create an intents handler class that we can use as a starting part for our Siri implementation.

The Intents Handler: Resolve, Confirm And Handle

Xcode generated a new target that has a starting point for us.

The first thing you have to do is set up this new target to be in the same app group as the app. As before, go to the “Capabilities” tab of the target, and turn on app groups, and configure it with your group name. Remember, apps in the same group have a sandbox that they can use to share files with each other. We need this in order for Siri requests to get to our app.

List-o-Mat has a function that returns the group document folder. We should use it whenever we want to read or write to a shared file.

func documentsFolder() -> URL? {
    return FileManager.default.containerURL(
        forSecurityApplicationGroupIdentifier: "group.com.app-o-mat.ListOMat")
}

For example, when we save the lists, we use this:

func save(lists: Lists) {
    guard let docsDir = documentsFolder() else {
        fatalError("no docs dir")
    }

    let url = docsDir.appendingPathComponent(fileName, isDirectory: false)

    // Encode lists as JSON and save to url
}

The Intents extension template created a file named IntentHandler.swift, with a class named IntentHandler. It also configured it to be the intents’ entry point in the extension’s plist.

The intent extension plist configures IntentHandler as the entry point

In this same plist, you will see a section to declare the intents we support. We’re going to start with the one that allows searching for lists, which is named INSearchForNotebookItemsIntent. Add it to the array under IntentsSupported.

Add the intent’s name to the intents plist (Large preview)

Now, go to IntentHandler.swift and replace its contents with this code:

import Intents

class IntentHandler: INExtension {
    override func handler(for intent: INIntent) -> Any? {
        switch intent {
        case is INSearchForNotebookItemsIntent:
            return SearchItemsIntentHandler()
        default:
            return nil
        }
    }
}

The handler function is called to get an object to handle a specific intent. You can just implement all of the protocols in this class and return self, but we’ll put each intent in its own class to keep it better organized.

Because we intend to have a few different classes, let’s give them a common base class for code that we need to share between them:

class ListOMatIntentsHandler: NSObject { }

The intents framework requires us to inherit from NSObject. We’ll fill in some methods later.

We start our search implementation with this:

class SearchItemsIntentHandler: ListOMatIntentsHandler, INSearchForNotebookItemsIntentHandling { }

To set up an intent handler, we need to implement three basic steps:

  1. Resolve the parameters.
    Make sure required parameters are given, and disambiguate any you don’t fully understand.
  2. Confirm that the request is doable.
    This is often optional, but even if you know that each parameter is good, you might still need access to an outside resource or have other requirements.
  3. Handle the request.
    Do the thing that is being requested.

INSearchForNotebookItemsIntent, the first intent we’ll implement, can be used as a task search. The kinds of requests we can handle with this are, “In List-o-Mat, show the grocery store list” or “In List-o-Mat, show the store list.”

Aside: “List-o-Mat” is actually a bad name for a SiriKit app because Siri has a hard time with hyphens in app names. Luckily, SiriKit allows us to provide alternate names and pronunciation guides. In the app’s Info.plist, add this section:

Add alternate app names and pronunciation guides to the app plist

This allows the user to say “list oh mat” and for that to be understood as a single word (without hyphens). It doesn’t look ideal on the screen, but without it, Siri sometimes thinks “List” and “Mat” are separate words and gets very confused.

Resolve: Figuring Out The Parameters

For a search for notebook items, there are several parameters:

  1. the item type (a task, a task list, or a note),
  2. the title of the item,
  3. the content of the item,
  4. the completion status (whether the task is marked done or not),
  5. the location it is associated with,
  6. the date it is associated with.

We require only the first two, so we’ll need to write resolve functions for them. INSearchForNotebookItemsIntent has methods for us to implement.

Because we only care about showing task lists, we’ll hardcode that into the resolve for item type. In SearchItemsIntentHandler, add this:

func resolveItemType(for intent: INSearchForNotebookItemsIntent,
                     with completion: @escaping (INNotebookItemTypeResolutionResult) -> Void) {
    completion(.success(with: .taskList))
}

So, no matter what the user says, we’ll be searching for task lists. If we wanted to expand our search support, we’d let Siri try to figure this out from the original phrase and then just use completion(.needsValue()) if the item type was missing. Alternatively, we could try to guess from the title by seeing what matches it. In this case, we would complete with success when Siri knows what it is, and we would use completion(.notRequired()) when we are going to try multiple possibilities.
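For illustration, a hedged sketch of that more permissive resolve might look like the following. This is not part of List-o-Mat's actual code; it just shows where .needsValue() would come in if we supported more item types:

// Hypothetical alternative, not used by List-o-Mat: accept whatever item type Siri
// understood, and only ask for a value when it is missing.
func resolveItemType(for intent: INSearchForNotebookItemsIntent,
                     with completion: @escaping (INNotebookItemTypeResolutionResult) -> Void) {
    switch intent.itemType {
    case .unknown:
        completion(.needsValue())
    default:
        completion(.success(with: intent.itemType))
    }
}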

Title resolution is a little trickier. What we want is for Siri to use a list if it finds one with an exact match for what you said. If it’s unsure or if there is more than one possibility, then we want Siri to ask us for help in figuring it out. To do this, SiriKit provides a set of resolution enums that let us express what we want to happen next.

So, if you say “Grocery Store,” then Siri would have an exact match. But if you say “Store,” then Siri would present a menu of matching lists.

We’ll start with this function to give the basic structure:

func resolveTitle(for intent: INSearchForNotebookItemsIntent,
                  with completion: @escaping (INSpeakableStringResolutionResult) -> Void) {
    guard let title = intent.title else {
        completion(.needsValue())
        return
    }
    let possibleLists = getPossibleLists(for: title)
    completeResolveListName(with: possibleLists, for: title, with: completion)
}

We’ll implement getPossibleLists(for:) and completeResolveListName(with:for:with:) in the ListOMatIntentsHandler base class.

getPossibleLists(for:) needs to try to fuzzy match the title that Siri passes us with the actual list names.

public func getPossibleLists(for listName: INSpeakableString) -> [INSpeakableString] {
    var possibleLists = [INSpeakableString]()
    for l in loadLists() {
        if l.name.lowercased() == listName.spokenPhrase.lowercased() {
            return [INSpeakableString(spokenPhrase: l.name)]
        }
        if l.name.lowercased().contains(listName.spokenPhrase.lowercased()) ||
            listName.spokenPhrase.lowercased() == "all" {
            possibleLists.append(INSpeakableString(spokenPhrase: l.name))
        }
    }
    return possibleLists
}

We loop through all of our lists. If we get an exact match, we’ll return it, and if not, we’ll return an array of possibilities. In this function, we’re simply checking to see whether the word the user said is contained in a list name (so, a pretty simple match). This lets “Grocery” match “Grocery Store.” A more advanced algorithm might try to match based on words that sound the same (for example, with the Soundex algorithm).

completeResolveListName(with:for:with:) is responsible for deciding what to do with this list of possibilities.

public func completeResolveListName(with possibleLists: [INSpeakableString],
                                    for listName: INSpeakableString,
                                    with completion: @escaping (INSpeakableStringResolutionResult) -> Void) {
    switch possibleLists.count {
    case 0:
        completion(.unsupported())
    case 1:
        if possibleLists[0].spokenPhrase.lowercased() == listName.spokenPhrase.lowercased() {
            completion(.success(with: possibleLists[0]))
        } else {
            completion(.confirmationRequired(with: possibleLists[0]))
        }
    default:
        completion(.disambiguation(with: possibleLists))
    }
}

If we got an exact match, we tell Siri that we succeeded. If we got one inexact match, we tell Siri to ask the user if we guessed it right.

If we got multiple matches, then we use completion(.disambiguation(with: possibleLists)) to tell Siri to show a list and let the user pick one.

Now that we know what the request is, we need to look at the whole thing and make sure we can handle it.

Confirm: Check All Of Your Dependencies

In this case, if we have resolved all of the parameters, we can always handle the request. Typical confirm() implementations might check the availability of external services or check authorization levels.

Because confirm() is optional, we could just do nothing, and Siri would assume we could handle any request with resolved parameters. To be explicit, we could use this:

func confirm(intent: INSearchForNotebookItemsIntent,
             completion: @escaping (INSearchForNotebookItemsIntentResponse) -> Void) {
    completion(INSearchForNotebookItemsIntentResponse(code: .success, userActivity: nil))
}

This means we can handle anything.

Handle: Do It

The final step is to handle the request.

func handle(intent: INSearchForNotebookItemsIntent,
            completion: @escaping (INSearchForNotebookItemsIntentResponse) -> Void) {
    guard let title = intent.title,
          let list = loadLists().filter({ $0.name.lowercased() == title.spokenPhrase.lowercased() }).first
    else {
        completion(INSearchForNotebookItemsIntentResponse(code: .failure, userActivity: nil))
        return
    }

    let response = INSearchForNotebookItemsIntentResponse(code: .success, userActivity: nil)
    response.tasks = list.items.map {
        return INTask(title: INSpeakableString(spokenPhrase: $0.name),
                      status: $0.done ? INTaskStatus.completed : INTaskStatus.notCompleted,
                      taskType: INTaskType.notCompletable,
                      spatialEventTrigger: nil,
                      temporalEventTrigger: nil,
                      createdDateComponents: nil,
                      modifiedDateComponents: nil,
                      identifier: "\(list.name)\t\($0.name)")
    }
    completion(response)
}

First, we find the list based on the title. At this point, resolveTitle has already made sure that we’ll get an exact match. But if there’s an issue, we can still return a failure.

When we have a failure, we have the option of passing a user activity. If your app uses Handoff and has a way to handle this exact type of request, then Siri might try deferring to your app to try the request there. It will not do this when we are in a voice-only context (for example, you started with “Hey Siri”), and it doesn’t guarantee that it will do it in other cases, so don’t count on it.
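If you do want to give the app a chance to take over, a hedged sketch of that failure path might look like this. The activity type string and the userInfo key are illustrative assumptions, not identifiers that List-o-Mat actually registers:

import Intents

// Hypothetical failure path that hands the request off to the app via a user activity.
// "com.app-o-mat.ListOMat.showList" and the "title" key are assumptions for illustration.
func failOverToApp(title: INSpeakableString,
                   completion: (INSearchForNotebookItemsIntentResponse) -> Void) {
    let activity = NSUserActivity(activityType: "com.app-o-mat.ListOMat.showList")
    activity.userInfo = ["title": title.spokenPhrase]
    completion(INSearchForNotebookItemsIntentResponse(code: .failureRequiringAppLaunch,
                                                      userActivity: activity))
}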

This is now ready to test. Choose the intent extension in the target list in Xcode. But before you run it, edit the scheme.

Edit the scheme of the intent extension to add a sample phrase for debugging.

That brings up a way to provide a query directly:

Add the sample phrase to the Run section of the scheme.

Notice, I am using “ListOMat” because of the hyphens issue mentioned above. Luckily, it’s pronounced the same as my app’s name, so it should not be much of an issue.

Back in the app, I made a “Grocery Store” list and a “Hardware Store” list. If I ask Siri for the “store” list, it will go through the disambiguation path, which looks like this:

Siri handles the request by asking for clarification.

If you say “Grocery Store,” then you’ll get an exact match, which goes right to the results.

Adding Items Via Siri

Now that we know the basic concepts of resolve, confirm and handle, we can quickly add an intent to add an item to a list.

First, add INAddTasksIntent to the extension’s plist:

Add the INAddTasksIntent to the extension plist

Then, update our IntentHandler’s handler(for:) function:

override func handler(for intent: INIntent) -> Any? {
    switch intent {
    case is INSearchForNotebookItemsIntent:
        return SearchItemsIntentHandler()
    case is INAddTasksIntent:
        return AddItemsIntentHandler()
    default:
        return nil
    }
}

Add a stub for the new class:

class AddItemsIntentHandler: ListOMatIntentsHandler, INAddTasksIntentHandling { }

Adding an item needs a resolve step similar to the one used for searching, except with a target task list instead of a title.

func resolveTargetTaskList(for intent: INAddTasksIntent,
                           with completion: @escaping (INTaskListResolutionResult) -> Void) {
    guard let title = intent.targetTaskList?.title else {
        completion(.needsValue())
        return
    }
    let possibleLists = getPossibleLists(for: title)
    completeResolveTaskList(with: possibleLists, for: title, with: completion)
}

completeResolveTaskList is just like completeResolveListName, but with slightly different types (a task list instead of the title of a task list).

public func completeResolveTaskList(with possibleLists: [INSpeakableString],
                                    for listName: INSpeakableString,
                                    with completion: @escaping (INTaskListResolutionResult) -> Void) {
    let taskLists = possibleLists.map {
        return INTaskList(title: $0,
                          tasks: [],
                          groupName: nil,
                          createdDateComponents: nil,
                          modifiedDateComponents: nil,
                          identifier: nil)
    }
    switch possibleLists.count {
    case 0:
        completion(.unsupported())
    case 1:
        if possibleLists[0].spokenPhrase.lowercased() == listName.spokenPhrase.lowercased() {
            completion(.success(with: taskLists[0]))
        } else {
            completion(.confirmationRequired(with: taskLists[0]))
        }
    default:
        completion(.disambiguation(with: taskLists))
    }
}

It has the same disambiguation logic and behaves in exactly the same way. Saying “Store” needs to be disambiguated, and saying “Grocery Store” would be an exact match.

We’ll leave confirm unimplemented and accept the default. For handle, we need to add an item to the list and save it.

func handle(intent: INAddTasksIntent,
            completion: @escaping (INAddTasksIntentResponse) -> Void) {
    var lists = loadLists()
    guard let taskList = intent.targetTaskList,
          let listIndex = lists.index(where: { $0.name.lowercased() == taskList.title.spokenPhrase.lowercased() }),
          let itemNames = intent.taskTitles, itemNames.count > 0
    else {
        completion(INAddTasksIntentResponse(code: .failure, userActivity: nil))
        return
    }

    // Get the list
    var list = lists[listIndex]

    // Add the items
    var addedTasks = [INTask]()
    for item in itemNames {
        list.addItem(name: item.spokenPhrase, at: list.items.count)
        addedTasks.append(INTask(title: item,
                                 status: .notCompleted,
                                 taskType: .notCompletable,
                                 spatialEventTrigger: nil,
                                 temporalEventTrigger: nil,
                                 createdDateComponents: nil,
                                 modifiedDateComponents: nil,
                                 identifier: nil))
    }

    // Save the new list
    lists[listIndex] = list
    save(lists: lists)

    // Respond with the added items
    let response = INAddTasksIntentResponse(code: .success, userActivity: nil)
    response.addedTasks = addedTasks
    completion(response)
}

We get a list of items and a target list. We look up the list and add the items. We also need to prepare a response for Siri to show with the added items and send it to the completion function.

This function can handle a phrase like, “In ListOMat, add apples to the grocery list.” It can also handle a list of items like, “rice, onions and olives.”

Siri adds a few items to the grocery store list.

Almost Done, Just A Few More Settings

All of this will work in your simulator or local device, but if you want to submit this, you’ll need to add an NSSiriUsageDescription key to your app’s plist, with a string that describes what you are using Siri for. Something like “Your requests about lists will be sent to Siri” is fine.

You should also add a call to:

INPreferences.requestSiriAuthorization { (status) in }

Put this in your main view controller’s viewDidLoad to ask the user for Siri access. This will show the message you configured above and also let the user know that they could be using Siri for this app.

The device will ask for permission if you try to use Siri in the app.
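As a concrete sketch (assuming your main view controller is a plain UIViewController subclass; the class name here is made up), the call could live in viewDidLoad like this:

import UIKit
import Intents

class MainViewController: UIViewController { // hypothetical name for your main view controller

    override func viewDidLoad() {
        super.viewDidLoad()

        // Shows the system Siri permission prompt (with the usage description above)
        // the first time it runs; afterwards it simply reports the current status.
        INPreferences.requestSiriAuthorization { status in
            print("Siri authorization status: \(status.rawValue)")
        }
    }
}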

Finally, you’ll need to tell Siri what to tell the user if the user asks what your app can do, by providing some sample phrases:

  1. Create a plist file in your app (not the extension), named AppIntentVocabulary.plist.
  2. Fill out the intents and phrases that you support.
Add an AppIntentVocabulary.plist to list the sample phrases that will invoke the intent you handle.

There is no way to really know all of the phrases that Siri will use for an intent, but Apple does provide a few samples for each intent in its documentation. The sample phrases for task-list searching show us that Siri can understand “Show me all my notes on <appName>,” but I found other phrases by trial and error (for example, Siri understands what “lists” are too, not just notes).

Summary

As you can see, adding Siri support to an app has a lot of steps, with a lot of configuration. But the code needed to handle the requests was fairly simple.

There are a lot of steps, but each one is small, and you might be familiar with a few of them if you have used extensions before.

Here is what you’ll need to prepare for a new extension on Apple’s developer website:

  1. Make an app ID for an Intents extension.
  2. Make an app group if you don’t already have one.
  3. Use the app group in the app ID for the app and extension.
  4. Add Siri support to the app’s ID.
  5. Regenerate the profiles and download them.

And here are the steps in Xcode for creating Siri’s Intents extension:

  1. Add an Intents extension using the Xcode template.
  2. Update the entitlements of the app and extension to match the profiles (groups and Siri support).
  3. Add your intents to the extension’s plist.

And you’ll need to add code to do the following things:

  1. Use the app group sandbox to communicate between the app and extension.
  2. Add classes to support each intent with resolve, confirm and handle functions.
  3. Update the generated IntentHandler to use those classes.
  4. Ask for Siri access somewhere in your app.

Finally, there are some Siri-specific configuration settings:

  1. Add the Siri support security string to your app’s plist.
  2. Add sample phrases to an AppIntentVocabulary.plist file in your app.
  3. Run the intent target to test; edit the scheme to provide the phrase.

OK, that is a lot, but if your app fits one of Siri’s domains, then users will expect that they can interact with it via voice. And because the competition among voice assistants is so fierce, we can only expect that WWDC 2018 will bring a bunch more domains and, hopefully, a much better Siri.


Distrust of the Symantec PKI: Immediate action needed by site operators

Google Webmaster Central Blog - Wed, 04/11/2018 - 06:58
Cross-posted from the Google Security Blog.

We previously announced plans to deprecate Chrome’s trust in the Symantec certificate authority (including Symantec-owned brands like Thawte, VeriSign, Equifax, GeoTrust, and RapidSSL). This post outlines how site operators can determine if they’re affected by this deprecation, and if so, what needs to be done and by when. Failure to replace these certificates will result in site breakage in upcoming versions of major browsers, including Chrome.

Chrome 66

If your site is using an SSL/TLS certificate from Symantec that was issued before June 1, 2016, it will stop functioning in Chrome 66, which could already be impacting your users.

If you are uncertain about whether your site is using such a certificate, you can preview these changes in Chrome Canary to see if your site is affected. If connecting to your site displays a certificate error or a warning in DevTools as shown below, you’ll need to replace your certificate. You can get a new certificate from any trusted CA, including DigiCert, which recently acquired Symantec’s CA business.

An example of a certificate error that Chrome 66 users might see if you are using a Legacy Symantec SSL/TLS certificate that was issued before June 1, 2016. 
The DevTools message you will see if you need to replace your certificate before Chrome 66.

Chrome 66 has already been released to the Canary and Dev channels, meaning affected sites are already impacting users of these Chrome channels. If affected sites do not replace their certificates by March 15, 2018, Chrome Beta users will begin experiencing the failures as well. You are strongly encouraged to replace your certificate as soon as possible if your site is currently showing an error in Chrome Canary.

Chrome 70

Starting in Chrome 70, all remaining Symantec SSL/TLS certificates will stop working, resulting in a certificate error like the one shown above. To check if your certificate will be affected, visit your site in Chrome today and open up DevTools. You’ll see a message in the console telling you if you need to replace your certificate.

The DevTools message you will see if you need to replace your certificate before Chrome 70.

If you see this message in DevTools, you’ll want to replace your certificate as soon as possible. If the certificates are not replaced, users will begin seeing certificate errors on your site as early as July 20, 2018. The first Chrome 70 Beta release will be around September 13, 2018.

Expected Chrome Release Timeline

The table below shows the First Canary, First Beta and Stable Release for Chrome 66 and 70. The first impact from a given release will coincide with the First Canary, reaching a steadily widening audience as the release hits Beta and then ultimately Stable. Site operators are strongly encouraged to make the necessary changes to their sites before the First Canary release for Chrome 66 and 70, and no later than the corresponding Beta release dates.

Release     | First Canary       | First Beta            | Stable Release
Chrome 66   | January 20, 2018   | ~ March 15, 2018      | ~ April 17, 2018
Chrome 70   | ~ July 20, 2018    | ~ September 13, 2018  | ~ October 16, 2018

For information about the release timeline for a particular version of Chrome, you can also refer to the Chromium Development Calendar which will be updated should release schedules change.

In order to address the needs of certain enterprise users, Chrome will also implement an Enterprise Policy that allows disabling the Legacy Symantec PKI distrust starting with Chrome 66. As of January 1, 2019, this policy will no longer be available and the Legacy Symantec PKI will be distrusted for all users.

Special Mention: Chrome 65

As noted in the previous announcement, SSL/TLS certificates from the Legacy Symantec PKI issued after December 1, 2017 are no longer trusted. This should not affect most site operators, as it requires entering into a special agreement with DigiCert to obtain such certificates. Accessing a site serving such a certificate will fail and the request will be blocked as of Chrome 65. To avoid such errors, ensure that such certificates are only served to legacy devices and not to browsers such as Chrome.


Posted by Devon O’Brien, Ryan Sleevi, Emily Stark, Chrome security team

Getting Started With the Mojs Animation Library: The HTML Module

Tuts+ Code - Web Development - Wed, 04/11/2018 - 05:00

A lot of websites now use some sort of animation to make their landing pages more appealing. Thankfully, there are many libraries which allow you to animate elements on a webpage without doing everything from scratch. In this tutorial, you will learn about one such library called mojs.

The library is very easy to use because of its simple declarative API. The animations that you can create with mojs will all be smooth and retina ready so that everything looks professional.

Installing Mojs

There are many ways to include mojs in your projects. You can grab the library from its GitHub repository. Alternatively, you can directly include a link to the latest version of the library from different CDNs in your project. 

<script src="//cdn.jsdelivr.net/mojs/latest/mo.min.js"></script>

Developers can also install mojs using package managers like npm and bower by running the following command:

npm install mo-js
bower install mojs

Once you have included the library in your project, you can start using different modules to create interesting animations.

The HTML Module in Mojs

This tutorial will focus on the HTML module in the mojs library. This module can be used to animate different HTML elements on the webpage.

The first thing that you need to do in order to animate a DOM element is create a mojs Html object. You can specify the selector of an element and its properties that you want to animate inside this object.

Setting a value for el will let you identify the element which you want to animate using mojs. You can either set its value as a selector or a DOM node.

The HTML module has some predefined attributes which can be used to animate different transform-related properties of a DOM element. For example, you can control the translation animation of an element along the x, y and z axes by specifying start and end values for the x, y and z properties. You can also rotate any element along different axes by using the angleX, angleY and angleZ properties. This is similar to the corresponding rotateX(), rotateY() and rotateZ() transforms in CSS. You can also skew an element along the X or Y axis with the help of the skewX and skewY properties.

Applying scaling animations on different elements is just as easy. If you want to scale an element in both directions, simply set a value for the scale property. Similarly, you can animate the scaling of elements along different axes by setting appropriate values for the scaleX and scaleY properties.

Besides all these properties which let you set the initial and final values of the animation, there are some other properties which control the animation playback. You can specify the duration of the animation using the duration property. The provided value needs to be a number, and it sets the animation duration in milliseconds. If you want to start an animation after some delay, you can set the delay value using the delay property. Just like duration, delay also expects its value to be a number.

Animations can be repeated more than once with the help of the repeat property. Its default value is zero, which means that the animation would be played only once. Setting it to 1 will play the animation twice, and setting it to 2 will play the animation three times. Once the animation has completed its first iteration, the element will go back to its initial state and start animating again (if you have set a value for the repeat property). This sudden jump from the final state to initial state may not be desirable in all cases. 

If you want the animation to play backwards once it has reached the final state, you can set the value of the isYoyo property to true. It is set to false by default. Finally, you can set the speed at which the animation should be played using the speed property. Its default value is 1. Setting it to 2 will play the animation twice as fast. Similarly, setting it to 0.5 will play the animation at half the speed.

The mojs Html objects that you created will not animate the respective elements by themselves. You will have to call the play() method on each mojs Html animation that you want to play. Here is an example which animates three different boxes using all the properties we just discussed:

var htmlA = new mojs.Html({
  el: ".a",
  x: { 0: 400 },
  angleZ: { 0: 720 },
  duration: 1000,
  repeat: 4,
  isYoyo: true
});

var htmlB = new mojs.Html({
  el: ".b",
  x: { 400: 0 },
  angleY: { 0: 360 },
  angleZ: { 0: 720 },
  duration: 1000,
  repeat: 4
});

var htmlC = new mojs.Html({
  el: ".c",
  x: { 0: 400 },
  angleY: { 0: 360 },
  scaleZ: { 1: 2 },
  skewX: { 0: 30 },
  duration: 1000,
  repeat: 4,
  isYoyo: true
});

document.querySelector(".play").addEventListener("click", function() {
  htmlA.play();
  htmlB.play();
  htmlC.play();
});

You are not limited to just animating the transform properties of an element. The mojs animation library allows you to animate all other animatable CSS properties as well. You just have to make sure that you provide valid initial and final values for them. For instance, you can animate the background color of an element by specifying valid values for background. 

If the CSS property that you want to animate contains a hyphen, you can remove the hyphen and convert the property name to camelCase when setting initial and final values inside the mojs Html object. This means that you can animate the border-radius by setting a valid value for the borderRadius property. The following example should make everything clear:

var htmlA = new mojs.Html({
  el: ".a",
  x: { 0: 400 },
  angleZ: { 0: 360 },
  background: { red: 'black' },
  borderWidth: { 10: 20 },
  borderColor: { 'black': 'red' },
  borderRadius: { 0: '50%' },
  duration: 2000,
  repeat: 4,
  isYoyo: true
});

document.querySelector(".play").addEventListener("click", function() {
  htmlA.play();
});

In the above example, the border color changes from black to red while the border radius animates from 0 to 50%. You should note that a unitless number will be considered a pixel value, while a number with units should be specified as a string like '50%'.

So far, we have used a single set of tween properties to control the playback of different animations. This meant that an element would take the same time to move from x:0 to x:200 as it would take to scale from scale:1 to scale:2.

This may not be a desirable behavior as you might want to delay the animation of some properties and control their duration as well. In such cases, you can specify the animation playback parameters of each property inside the property object itself.

var htmlA = new mojs.Html({
  el: ".a",
  x: { 0: 400, duration: 1000, repeat: 8, isYoyo: true },
  angleY: { 0: 360, delay: 500, duration: 1000, repeat: 4, isYoyo: true },
  angleZ: { 0: 720, delay: 1000, duration: 1000, repeat: 4, isYoyo: true }
});

document.querySelector(".play").addEventListener("click", function() {
  htmlA.play();
});

Easing Functions Available in Mojs

Every animation that you create will have the sin.out easing applied to it by default. If you want the animation playback to progress using a different easing function, you can specify its value using the easing property. By default, the value specified for easing is also used when the animation is playing backwards. If you want to apply a different easing for backward animations, you can set a value for the backwardEasing property.

The mojs library has 11 different built-in easing functions. These are linear, ease, sin, quad, cubic, quart, quint, expo, circ, back, and elastic. The linear easing only has one variation named linear.none. This makes sense because the animation will progress with the same speed at different stages. All other easing functions have three different variations with in, out and inout appended at the end.

There are two methods to specify the easing function for an animation. You can either use a string like elastic.in or you can access the easing functions directly using the mojs.easing object like mojs.easing.elastic.inout. In the embedded CodePen demo, I have applied different easing functions on each bar so its width will change at a different pace. This will give you an idea of how the animation speed differs with each easing.

var allBoxes = document.querySelectorAll(".box");
var animations = new Array();
var easings = ['ease.in', 'sin.in', 'quad.in', 'cubic.in', 'quart.in', 'quint.in', 'expo.in', 'circ.in', 'back.in', 'elastic.in'];

allBoxes.forEach(function(box, index) {
  var animation = new mojs.Html({
    el: box,
    width: { 0: 550, duration: 4000, repeat: 8, isYoyo: true, easing: easings[index] }
  });
  animations.push(animation);
});

document.querySelector(".play").addEventListener("click", function() {
  animations.forEach(function(anim) {
    anim.play();
  });
});

Since we only want to change the easing function applied to each box, I have created a loop to iterate over them and then apply an easing function picked up from the easings array. This prevents unnecessary code duplication. You can use the same technique to animate multiple elements where the property values vary based on a pattern.

Controlling Animation Playback

Mojs provides a lot of methods which allow us to control the animation playback for different elements once it has already started. You can pause the animation at any time by calling the pause() method. Similarly, you can resume any paused animation by calling the resume() method. 

Animations that have been paused using pause() will always resume from the point at which you called pause(). If you want the animation to start from the beginning after it has been paused, you should use the stop() method instead.

You can also play the animation backwards using the playBackward() method. Earlier, we used the speed property to control the speed at which mojs played an animation. Mojs also has a setSpeed() method which can set the animation speed while it is still in progress. In the following example, I have used all these methods to control the animation playback based on button clicks.

var speed = 1;

var htmlA = new mojs.Html({
  el: ".a",
  angleZ: { 0: 720 },
  borderRadius: { 0: '50%' },
  borderWidth: { 10: 100 },
  duration: 2000,
  repeat: 9999,
  isYoyo: true
});

document.querySelector(".play").addEventListener("click", function() {
  htmlA.play();
});

document.querySelector(".pause").addEventListener("click", function() {
  htmlA.pause();
});

document.querySelector(".stop").addEventListener("click", function() {
  htmlA.stop();
});

document.querySelector(".faster").addEventListener("click", function() {
  speed = speed + 1;
  htmlA.setSpeed(speed);
  document.querySelector(".data").innerHTML = "Speed: " + speed;
});

document.querySelector(".slower").addEventListener("click", function() {
  speed = speed / 2;
  htmlA.setSpeed(speed);
  document.querySelector(".data").innerHTML = "Speed: " + speed;
});

In the following demo, the current animation playback speed is shown in the black box in the bottom left corner. Clicking on Faster will increase the current speed by 1, and clicking on Slower will halve the current speed.

Final Thoughts

In this tutorial, we learned how to animate different DOM elements on a webpage using the HTML module in mojs. We can specify the element we want to animate using either a selector or a DOM node. The library allows you to animate different transform properties and the opacity of any element using the built-in properties of the mojs Html object. However, you can also animate other CSS properties by specifying the name using camelCase notation.

JavaScript is not without its learning curves, and there are plenty of frameworks and libraries to keep you busy, as well. If you’re looking for additional resources to study or to use in your work, check out what we have available in the Envato Market.

Let me know if there is anything you would like me to clarify in this tutorial. We will cover the Shape module from the mojs library in the next tutorial.

