
Web Technologies

Percona Live 2018 Sessions: Ghostferry – the Swiss Army Knife of Live Data Migrations with Minimum Downtime

Planet MySQL - Tue, 04/24/2018 - 18:11

In this blog post on Percona Live 2018 sessions, we'll talk with Shuhao Wu, Software Developer for Shopify, Inc., about how Ghostferry is the Swiss Army knife of live data migrations.

Existing tools like mysqldump and replication cannot migrate data between GTID-enabled MySQL and non-GTID-enabled MySQL – a common configuration across multiple cloud providers that cannot be changed. These tools are also cumbersome to operate and error-prone, thus requiring a DBA’s attention for each data migration. Shopify’s team introduced a tool that allows for easy migration of data between MySQL databases with constant downtime on the order of seconds.

Inspired by gh-ost, their tool is named Ghostferry and allows application developers at Shopify to migrate data without assistance from DBAs. It has been used to rebalance sharded data across databases. They open sourced Ghostferry at the Percona Live 2018 conference so that anyone can migrate their own data with minimal hassle and downtime. Since Shopify wrote Ghostferry as a library, you can use it to build specialized data movers that move arbitrary subsets of data from one database to another.

Shuhao walked through what data migration is, how it works, and how Ghostferry works to make this process simpler and standard across platforms – especially in systems (like cloud providers such as AWS or Google) where you don’t have control of the instances. Ghostferry also simplifies the replication process and allows someone to copy across instances with a single Ghostferry command, rather than having to understand both the source and target instances.

After the Percona Live 2018 sessions talk, I had a chance to speak with Shuhao about Ghostferry. Check it out below.

The post Percona Live 2018 Sessions: Ghostferry – the Swiss Army Knife of Live Data Migrations with Minimum Downtime appeared first on Percona Database Performance Blog.

Categories: Web Technologies

MySQL Community Awards 2018: the Winners

Planet MySQL - Tue, 04/24/2018 - 16:18

The MySQL Community Awards initiative is an effort to acknowledge and thank individuals and corporations for their contributions to the MySQL ecosystem. It is a from-the-community, by-the-community, and for-the-community effort. The committee is composed of an independent group of community members of different orientation and opinion, themselves past winners or known contributors to the community.

The 2018 community awards were presented on April 23, 2018, during the Welcome Reception at the Percona Live conference. The winners are:

MySQL Community Awards: Community Contributor of the year 2018

  • Jean-François Gagné
    Jean-François was nominated for his many blog posts, bug reports, and experiment results that make MySQL much better. Here is his blog: https://jfg-mysql.blogspot.com/
  • Sveta Smirnova
    Sveta spreads knowledge and good practice on all things MySQL as a frequent speaker and blogger. Her years of experience in testing, support, and consulting are shared in webinars, technical posts, conferences around the world and in her book “MySQL Troubleshooting”.

MySQL Community Awards: Application of the year 2018

  • MyRocks
    MyRocks is now in MariaDB, Percona Server and PolarDB (Alibaba). Intel, MariaDB and Rockset are optimizing it for cloud native storage. It will soon be available for MySQL 8.x. As a bonus, they have engaged with academia to explain how to make tuning easier.
  • ProxySQL
    ProxySQL made an appearance at the awards for a second year in a row, this time for the product as a whole rather than just its creator, because it is just so popular. It solves serious, real-world problems experienced by DBAs in an elegant way. Here is what some of the nominations said: “ProxySQL is a great alternative for MaxScale and MySQL Proxy. Many companies are using it in production. It is simple, easy to configure, and works reliably!” “great results, frequent updates and support from a committed team”
  • Vitess
    Vitess is a database clustering system for horizontal scaling of MySQL. Originally developed at YouTube/Google and now under the CNCF, Vitess is free and open source and aims to solve scaling problems for MySQL installations. Vitess enjoys an active contributing community, runs in production at companies such as Square and Slack, and is being evaluated and adopted by many others. Vitess has a public, active, and welcoming Slack channel. “Vitess helps users scale MySQL by making sharding painless. It minimizes changes needed on the application side, and at the same time simplifies the management of shards.”

MySQL Community Awards: Corporate Contributor of the year 2018

  • Alibaba Cloud
    Alibaba Cloud authors AliSQL, a free and open source MySQL branch. They take patches from various contributors as well as providing Alibaba Cloud's own innovative and disruptive at-scale patches for the MySQL server.
    Alibaba Cloud has also made an investment in MariaDB that will no doubt help keep the MySQL ecosystem competitive and thriving.

Committee:
Alexey Kopytov
Baron Schwartz
Bill Karwin
Colin Charles
Daniël van Eeden
Eric Herman
Frédéric Descamps
Justin Swanhart
Mark Leith
Santiago Lertora
Shlomi Noach
Simon Mudd

Co-secretaries:
Agustín Gallego
Emily Slocombe

Thank you to this year's anonymous sponsor for donating the goblets. Thank you to Colin Charles for acquiring and transporting the goblets. Thank you to Santiago Lertora for our website!

Here are some more photos from leFred Descamps during the event:

 

Categories: Web Technologies

Grid to Flex

CSS-Tricks - Tue, 04/24/2018 - 14:04

Una Kravets shows how to make layouts in CSS Grid with flexbox fallbacks for browsers that don’t support those grid properties just yet. Una writes:

CSS grid is AMAZING! However, if you need to support users of IE11 and below, or Edge 15 and below, grid won't really work as you expect... This site is a solution for you so you can start to progressively enhance without fear!

The site provides examples using common layouts and component patterns, including code snippets. For example:

See the Pen Grid To Flex -- Example 1 by Una Kravets (@una) on CodePen.
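The core of the technique is to write a flexbox layout as the default and then enhance to grid inside an @supports block, which IE11 and older Edge ignore. Here is a minimal sketch of that pattern (the class names and sizes are illustrative, not taken from Una's examples):

/* Flexbox fallback: what IE11 / Edge 15 and below will use */
.gallery {
  display: flex;
  flex-wrap: wrap;
}
.gallery > * {
  flex: 1 1 200px; /* rough stand-in for the grid column sizing */
  margin: 0.5rem;
}

/* Enhancement: grid-supporting browsers use this instead */
@supports (display: grid) {
  .gallery {
    display: grid;
    grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
    grid-gap: 1rem;
  }
  .gallery > * {
    margin: 0; /* grid-gap replaces the flex margins */
  }
}

Because IE11 never parses the @supports block, it simply keeps the flex layout; no JavaScript feature detection is required.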


The post Grid to Flex appeared first on CSS-Tricks.

Categories: Web Technologies

Percona Live 2018 Sessions: MySQL at Twitter

Planet MySQL - Tue, 04/24/2018 - 13:21

In this Percona Live 2018 blog, we'll talk with Ronald Francisco, SRE of Database Infrastructure at Twitter, about why they moved from a fork of MySQL to MySQL 5.7.

We already started today with a great set of keynote sessions, and now the breakout sessions have begun in earnest. I’ve been looking in on the talks and stopping to talk with some of the presenters.

In this session, Ronald Ramon Francisco, SRE, Database Infrastructure at Twitter, Inc., presented the motivation for moving from a fork of MySQL to MySQL proper, and why they decided to do it. Twitter had been using their own fork of MySQL for many years. Last year the team decided to migrate to the community version of MySQL 5.7 and abandon their own version. The road to the community version was full of challenges.

He also discussed the challenges and surprises encountered and how they overcame them. Finally, he looked at lessons learned, recommendations, and their future plans.

I got a chance to speak with Ronald after his talk, and ask a few questions.

Check it out below.

The post Percona Live 2018 Sessions: MySQL at Twitter appeared first on Percona Database Performance Blog.

Categories: Web Technologies

MySQL Performance : over 1.8M QPS with 8.0 GA on 2S Skylake !

Planet MySQL - Tue, 04/24/2018 - 12:14

Last year we already published our over 2.1M QPS record with MySQL 8.0 -- it was not yet GA at that moment, and the result was obtained on a server with 4 CPU sockets (4S) of Intel Broadwell v4. We did not plan any improvement in 8.0 for RO-related workloads, and the main target of this test was to ensure there are NO regressions in the results (yet) compared to MySQL 5.7 (where the main RO improvements were delivered). For MySQL 8.0 we mostly focused our efforts on the lagging WRITE performance in MySQL/InnoDB, and our "target HW" was 2-CPU-socket servers (2S) -- probably the most widely used HW configuration for today's MySQL Server deployments..

However, not only SW but also HW is progressing quickly these days ! -- and one of my biggest surprises last time was the Intel Skylake CPU ;-)) -- the following graph reflects the difference between two similar 2S servers, where one has the "old" 44-core-HT Broadwell v4 CPUs and the other the "new" 48-core-HT Skylake CPUs :


the difference is really impressive, especially when you see that at just a 32-user load (when the CPU is not at all saturated, neither on 44 cores nor on 48) there is already a 50% gain with Skylake ! (and this is about pure "response time"), and at peak the QPS level is over 1.8M QPS (not far from an 80% gain over Broadwell)..

And this result marks our next milestone in MySQL RO performance on 2S HW ! ;-))

Sysbench RO Point-Selects 10Mx8-tables latin1 on 2S 48cores-HT Skylake

As already mentioned, the main gain comes from the MySQL 5.7 changes, and we're probably just a little bit lucky here to see MySQL 8.0 slightly better than 5.7 ;-)) (while as you can see from the chart, it was also a good reason for MariaDB to move to InnoDB 5.7 to match similar gains compared to InnoDB 5.6)..
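(For readers who want to try a similar point-select run themselves: a sysbench 1.0 invocation of roughly this shape should reproduce the workload type; the host, credentials, thread count, and duration below are placeholders, not the exact setup used for these results:)

sysbench oltp_point_select \
  --tables=8 --table-size=10000000 \
  --mysql-host=127.0.0.1 --mysql-user=sbtest --mysql-password=sbtest \
  --threads=64 --time=300 --report-interval=10 \
  run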

So it's as expected that we see no difference on mixed OLTP_RO :

Sysbench OLTP_RO 10Mx8-tables latin1 on 2S 48cores-HT Skylake

and what is amazing here is that on 2S Skylake we're now reaching the 1M QPS result that we obtained in the past with MySQL 5.7 on the same 4S Broadwell v4 box (which was not yet upgraded to 96 cores at that time).. And it makes me smile now to recall all the discussions with our users mentioning "they will never use anything bigger than a 2S server" -- and here we are ! -- it's exactly a 2S, 48-cores-"only" box, but hitting ALL the problems we already investigated on bigger HW and fixed ON TIME ! ;-)) -- well, there is still a lot of work ahead, so let's hope we'll be "always on time", let's see..

The most "sensitive" RO workload for me was always "distinct ranges" in Sysbench, as it points to issues anyone can hit with "group by" or "distinct" queries.. -- these involve the temp memory tables code, and this code was completely rewritten in 8.0 "from scratch" to finally be simpler, more "maintainable", and more scalable. But this does not always mean "as efficient as before" (it would probably be most efficient if it were rewritten in assembler, but again -- who would want to maintain that again ? ;-))

Sysbench RO Distinct-Ranges 10Mx8-tables latin1 on 2S 48cores-HT Skylake

so far, MySQL 8.0 is doing mostly the same as 5.7 here, which is really good keeping in mind the impact of the changes.. (while further improvements may still come, and many old "broken" cases are solved now, etc.)

(but I have nothing to say about MariaDB 10.3.5 here)

well, this was about the "latin1" RO results -- stay tuned, there is more to come ;-))

And Thank You for using MySQL !

P.S. an "attentive reader" may ask himself -- if over 1.8M QPS is reached on 2S Skylake, what QPS level will then be matched on 4S Skylake ??? -- and I'd tell you : we don't care ;-)) we know the result will be higher, but we're NOT running after numbers, and there are other, more important problems waiting to be resolved, rather than showing you "yet more good numbers" ;-))

Rgds,
-Dimitri
Categories: Web Technologies

JAMstack Comments

CSS-Tricks - Tue, 04/24/2018 - 10:30

JAMstack sites are often seen as being static. A more accurate mental model is that they are sites which have the ability to be hosted statically. The difference might seem semantic but, thanks to the rise of many tools and services which simplify running a build and deploying to static hosting infrastructure, such sites can feel much fresher and more dynamic than you might imagine, while still being served from static hosting infrastructure, with all the benefits that brings.

A feature often used as an example of why a site cannot be hosted statically is comments. A comments engine needs to handle submissions and allow for moderation, and is by its very nature "dynamic".

Comment systems are generally thought of as quite dynamic content

Thanks to the growing ecosystem of tools available for JAMstack sites, there are solutions to this. Let's look at an example which you could use on your own site, which:

  • Does not depend on client-side JavaScript
  • Could work with any static site generator
  • Includes moderation
  • Sends notifications when new comments need moderating
  • Bakes the comments into your site, so that they load quickly and appear in searches

This example makes use of some of the features of Netlify, a platform for automating, deploying and hosting web projects, but many of the principles could be used with other platforms.

You can see the example site here.

Stashing our content

We'll create 2 forms to receive all of our comments at the different stages of their journey from commenter to content. When Netlify sees a <form>, it creates a unique data store for the data the form collects. We'll make great use of that.

  • Form 1) A queue to hold all of the new comment submissions. In other words, a store to hold all comments awaiting moderation.
  • Form 2) Contains all approved comments.

The act of moderation will be somebody looking at each new submission and deciding, "yep!" or "nope!". Those that get nope-d will be deleted from the queue. Those that are approved will be posted over to the approved comments form.

All of the comments in the approved comments form are used by our static site generator in subsequent site builds thanks to the API access Netlify gives to the submissions in our forms.

The comment form

Each page includes an HTML <form>. If you add the boolean netlify attribute to any HTML form element in your site, Netlify will automatically generate an API for your form and gather all of its submissions for you. You'll also be able to access the submissions via that API later. Handy!

The comments <form> on each page will look a lot like this (some classes and extra copy omitted for clarity):

<form netlify name="comments-queue" action="/thanks">
  <input name="path" type="hidden" value="{{ page.url }}">
  <p>
    <label for="name">Your name</label>
    <input type="text" name="name" id="name">
  </p>
  <p>
    <label for="email">Your email</label>
    <input type="email" name="email" id="email">
  </p>
  <p>
    <label for="comment">Your comment</label>
    <textarea name="comment" id="comment"></textarea>
  </p>
  <p>
    <button type="submit">Post your comment</button>
  </p>
</form>

You may notice that the form also includes a type="hidden" field to let us know which page on the site this comment is for. Our static site generator populates that for us when the site is generated, and we'll use it later when deciding which comments should be shown on which page.

Submissions and notifications

When a new comment is posted via the comment form, Netlify not only stashes that for us, but can also send a notification. That could be:

  • an email
  • a Slack notification
  • a web hook of our choosing.

These give us the chance to automate things a little.

New submissions result in a notification being posted to Slack. We'll get to see what was submitted and to which page right there in our Slack client.

To make things extra slick, we can massage that notification a little to include some action buttons. One button to delete the comment, one to approve it. Approving a new comment from a Slack notification on your phone while riding the bus feels good.

We can't make those buttons work without running a little bit of logic, which we can do in a Lambda function. Netlify recently added support for Lambda functions too, making the process of writing and deploying Lambdas part of the same deployment process. You'll not need to go rummaging around in Amazon's AWS configuration settings.

We'll use one Lambda function to add some buttons to our Slack notification, and another Lambda function to handle the actions of clicking either of those buttons.
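As a rough sketch of the second function, the handler below shows how a Slack button click could be turned into a Netlify API call. The form name, environment variable names, and site URL here are assumptions for illustration; the real example repo's code may differ:

// functions/comment-action.js -- a hypothetical sketch, not the example repo's code.
const querystring = require("querystring");
const fetch = require("node-fetch"); // assumed dependency

exports.handler = function (event, context, callback) {
  // Slack posts interactive button clicks as a form-encoded `payload` field.
  const payload = JSON.parse(querystring.parse(event.body).payload);
  const action = payload.actions[0].value;     // "approve" or "delete" (our button values)
  const submissionId = payload.callback_id;    // stashed when the notification was sent
  const token = process.env.NETLIFY_API_TOKEN; // assumed environment variable name
  const api = "https://api.netlify.com/api/v1";

  let work;
  if (action === "delete") {
    // Nope'd: remove the submission from the moderation queue.
    work = fetch(`${api}/submissions/${submissionId}?access_token=${token}`, {
      method: "DELETE"
    });
  } else {
    // Approved: read the queued submission, then re-post its fields to the
    // approved-comments form as a regular form submission.
    work = fetch(`${api}/submissions/${submissionId}?access_token=${token}`)
      .then(res => res.json())
      .then(submission =>
        fetch("https://example-site.netlify.com/", { // assumed site URL
          method: "POST",
          headers: { "Content-Type": "application/x-www-form-urlencoded" },
          body: querystring.stringify(
            Object.assign({ "form-name": "comments-approved" }, submission.data)
          )
        })
      );
  }

  work
    .then(() => callback(null, { statusCode: 200, body: "Comment handled" }))
    .catch(err => callback(err));
};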

Bring the comments into the site

With a freshly approved comment being posted to our approved comments form, we are back to using the submission event triggers that Netlify provides. Every time something is posted to the approved comments form, we'll want to include it in the site, so we have Netlify automatically rebuild and deploy our site.

Most static site generators have some notion of data files. Jekyll uses files in a _data directory; Hugo has a similar data directory. This example uses Eleventy as its static site generator, which has a similar concept. We'll make use of this.

Each time we run our site build, whether in our local development environment or within Netlify through their automated builds, the first step is to pull all of our external data into local data files which our SSG can then use. We do that with a Gulp task.

Armed with a `comments.json` file populated from a call to Netlify's form submission API (which grabbed all of our approved comments), our SSG can now build the site as normal and bake the correct comments right into the HTML of our pages.
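To make that step concrete, a data-pull script along these lines could run before each build. The form ID variable, token variable, and output path are assumptions for illustration; the real example uses a Gulp task, but the API call is the interesting part:

// get-comments.js -- a hypothetical pre-build step, not the repo's exact Gulp task.
const fetch = require("node-fetch"); // assumed dependency
const fs = require("fs");

const formId = process.env.APPROVED_COMMENTS_FORM_ID; // assumed variable name
const token = process.env.NETLIFY_API_TOKEN;          // assumed variable name

fetch(`https://api.netlify.com/api/v1/forms/${formId}/submissions?access_token=${token}`)
  .then(res => res.json())
  .then(submissions => {
    // Keep only the fields the templates need; `path` tells the SSG which page
    // each comment belongs to (it came from the hidden input in the form).
    const comments = submissions.map(s => ({
      path: s.data.path,
      name: s.data.name,
      comment: s.data.comment,
      date: s.created_at
    }));
    fs.writeFileSync("src/_data/comments.json", JSON.stringify(comments, null, 2));
    console.log(`Saved ${comments.length} comments`);
  })
  .catch(err => {
    console.error("Failed to fetch comments", err);
    process.exit(1);
  });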

Some benefits

Well that was fun, but why bother?

Yes, you could use something like Disqus to add comments to a statically hosted site via a few lines of JavaScript. But that adds an external JavaScript dependency and results in the comments about your content living far away from the content itself. And it's only available once the JavaScript has loaded, pulled in the data and then injected it into your site.

Whereas this method results in comments that are baked right into the content of your site. They'll show up in searches on Google and they will load as part of your site without the need for any client-side JavaScript.

Even better, they'll be available for you to commit right into your code repository with the rest of your content, giving you more peace of mind and portability in the future.

The example site and all of its code are available to explore. You can try submitting comments if you like (although poor old Phil will need to moderate any comments on this example site before they appear, but that will just make him feel loved).

Better still, you can clone this example and deploy your own version to Netlify with just a few clicks. The example site explains how.

Just show me behind the scenes right now!

If you want to take a look at how things behave for the moderator of a site using this system, without grabbing a copy of your own, this short video will walk through a comment being made, moderated, and incorporated into the site.

The post JAMstack Comments appeared first on CSS-Tricks.

Categories: Web Technologies

Announcing npm@6

Echo JS - Tue, 04/24/2018 - 09:58
Categories: Web Technologies

Taking advantage of new transaction length metadata

Planet MySQL - Tue, 04/24/2018 - 07:47

MySQL 8.0.2 introduced a small yet powerful piece of per-transaction metadata in binary logs: the transaction length.

MySQL binary logs are used for many things other than MySQL replication or backup/recovery: replicating to Hadoop; replicating to other databases, such as Oracle; change data capture (CDC) and extract-transform-load (ETL); change notification for cache invalidation; change tracking for differential backups; etc.

Categories: Web Technologies

Server-Side Visualization With Nightmare

CSS-Tricks - Tue, 04/24/2018 - 06:36

This is an extract from chapter 11 of Ashley Davis's book Data Wrangling with JavaScript, now available on the Manning Early Access Program. I absolutely love this idea, as there is so much data visualization stuff on the web that relies on fully functioning client-side JavaScript and potentially more API calls. It's not nearly as robust, accessible, or syndicatable as it could be. If you bring that data visualization back to the server, you can bring progressive enhancement to the party. All example code and data can be found on GitHub.

When doing exploratory coding or data analysis in Node.js it is very useful to be able to render a visualization from our data. If we were working in browser-based JavaScript we could choose any one of the many charting, graphics, and visualization libraries. Unfortunately, under Node.js, we don't have any viable options, so how else can we achieve this?

We could try something like faking the DOM under Node.js, but I found a better way. We can make our browser-based visualization libraries work for us under Node.js using a headless browser. This is a browser that has no user interface. You can think of it as a browser that is invisible.

I use Nightmare under Node.js to capture visualizations to PNG and PDF files and it works really well!

The headless browser

When we think of a web-browser we usually think of the graphical software that we interact with on a day to day basis when browsing the web. Normally we interact with such a browser directly, viewing it with our eyes and controlling it with our mouse and keyboard as shown in Figure 1.

Figure 1: The normal state of affairs: our visualization renders in a browser and the user interacts directly with the browser

A headless browser, on the other hand, is a web-browser that has no graphical user interface and no direct means for us to control it. You might ask: what is the use of a browser that we can't directly see or interact with?

Well, as developers we would typically use a headless browser for automating and testing web sites. Let’s say that you have created a web page and you want to run a suite of automated tests against it to prove that it works as expected. The test suite is automated, which means it is controlled from code and this means that we need to drive the browser from code.

We use a headless browser for automated testing because we don't need to directly see or interact with the web page that is being tested. Viewing such an automated test in progress is unnecessary; all we need to know is whether the test passed or failed, and if it failed we would like to know why. Indeed, having a GUI for the browser under test would actually be a hindrance for a continuous-integration or continuous-deployment server, where many such tests can run in parallel.

So headless browsers are often used for automated testing of our web pages, but they are also incredibly useful for capturing browser-based visualizations and outputting them to PNG images or PDF files. To make this work we need a web server and a visualization; we must then write code to instance a headless browser and point it at our web server. Our code then instructs the headless browser to take a screenshot of the web page and save it to our file system as a PNG or PDF file.

Figure 2: We can use a headless browser under Node.js to capture our visualization to a static image file

Nightmare is my headless browser of choice. It is a Node.js library (installed via npm) that is built on Electron. Electron is a framework normally used for building cross-platform desktop apps that are based on web-technologies.

Why Nightmare?

It’s called Nightmare, but it’s definitely not a Nightmare to use. In fact, it’s the simplest and most convenient headless browser that I’ve used. It automatically includes Electron, so to get started we simply install Nightmare into our Node.js project as follows:

npm install --save nightmare

That’s all we need to install Nightmare and we can start using it immediately from JavaScript!

Nightmare comes with almost everything we need: a scripting library with an embedded headless browser. It also includes the communication mechanism to control the headless browser from Node.js. For the most part it's seamless and well integrated with Node.js.
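For example, a minimal session to confirm the install works might look like this (the URL is arbitrary):

const Nightmare = require('nightmare');

const nightmare = new Nightmare(); // no window appears -- it runs headless by default

nightmare
  .goto('https://example.com')      // load a page
  .evaluate(() => document.title)   // run code inside the page, return the result to Node.js
  .end()                            // flush queued commands and shut down Electron
  .then(title => console.log('Loaded page titled:', title))
  .catch(err => console.error(err));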

Electron is built on Node.js and Chromium and maintained by GitHub and is the basis for a number of popular desktop applications.

Here are the reasons that I choose to use Nightmare over any other headless browser:

  • Electron is very stable.
  • Electron has good performance.
  • The API is simple and easy to learn.
  • There is no complicated configuration (just start using it).
  • It is very well integrated with Node.js.
Nightmare and Electron

When you install Nightmare via npm it automatically comes with an embedded version of Electron. So, we can say that Nightmare is not just a library for controlling a headless browser, it effectively is the headless browser. This is another reason I like Nightmare. With some of the other headless browsers, the control library is separate, or it’s worse than that and they don’t have a Node.js control library at all. In the worst case, you have to roll your own communication mechanism to control the headless browser.

Nightmare creates an instance of the Electron process using the Node.js child_process module. It then uses inter-process communication and a custom protocol to control the Electron instance. The relationship is shown in Figure 3.

Figure 3: Nightmare allows us to control Electron running as a headless browser

Our process: Capturing visualizations with Nightmare

So what is the process of capturing a visualization to an image file? This is what we are aiming at:

  1. Acquire data.
  2. Start a local web server to host our visualization.
  3. Inject our data into the web server.
  4. Instance a headless browser and point it at our local web server.
  5. Wait for the visualization to be displayed.
  6. Capture a screenshot of the visualization to an image file.
  7. Shut down the headless browser.
  8. Shut down the local web server.
Prepare a visualization to render

The first thing we need is a visualization. Figure 4 shows the chart we'll work with. This is a chart of New York City's yearly average temperature for the past 200 years.

Figure 4: Average yearly temperature in New York City for the past 200 years

To run this code you need Node.js installed. For this first example we'll also use live-server (any web server will do) to test the visualization, because we haven't created our Node.js web server yet. Install live-server as follows:

npm install -g live-server

Then you can clone the example code repo for this blog post:

git clone https://github.com/Data-Wrangling-with-JavaScript/nodejs-visualization-example

Now go into the repo, install dependencies, and run the example using live-server:

cd nodejs-visualization-example/basic-visualization
bower install
live-server

When you run live-server your browser should automatically open and you should see the chart from Figure 4.

It's a good idea to check that your visualization works directly in a browser before you try to capture it in a headless browser; there could easily be something wrong with it, and problems are much easier to troubleshoot in a real browser than in the headless browser. live-server has live reload built in, so you now have a nice little setup where you can edit and improve the chart interactively before you try to capture it under Node.js.

This simple line chart was constructed with C3. Please take a look over the example code and maybe look at some of the examples in the C3 gallery to learn more about C3.

Starting the web server

To host our visualization, we need a web server. It's not quite enough to have a web server; we also need to be able to dynamically start and stop it. Listing 1 shows the code for our web server.

Listing 1 – Code for a simple web server that can be started and stopped

const express = require('express');
const path = require('path');

module.exports = {
  start: () => { // Export a start function so we can start the web server on demand.
    return new Promise((resolve, reject) => {
      const app = express();

      // Make our 'public' sub-directory accessible via HTTP.
      const staticFilesPath = path.join(__dirname, "public");
      const staticFilesMiddleWare = express.static(staticFilesPath);
      app.use('/', staticFilesMiddleWare);

      const server = app.listen(3000, err => { // Start the web server!
        if (err) {
          reject(err); // Error occurred while starting web server.
        }
        else {
          resolve(server); // Web server started ok.
        }
      });
    });
  }
};

The code module in listing 1 exports a start function that we can call to kickstart our web server. This technique, being able to start and stop our web server, is also very useful for doing automated integration testing on a web site. Imagine that you want to start your web server, run some tests against it and then stop it at the end.

So now we have our browser-based visualization and we have a web server that can be started and stopped on demand. These are the raw ingredients we need for capturing server-side visualizations. Let’s mix it up with Nightmare!

Rendering the web page to an image

Now let’s flesh out the code to capture a screenshot of the visualization with Nightmare. Listing 2 shows the code that instances Nightmare, points it at our web server and then takes the screenshot.

Listing 2 – Capturing our chart to an image file using Nightmare

const webServer = require('./web-server.js');
const Nightmare = require('nightmare');

webServer.start() // Start the web server.
  .then(server => {
    const outputImagePath = "./output/nyc-temperatures.png";
    const nightmare = new Nightmare(); // Create the Nightmare instance.
    return nightmare.goto("http://localhost:3000") // Point the browser at the web server we just started.
      .wait("svg") // Wait until the chart appears on screen.
      .screenshot(outputImagePath) // Capture a screenshot to an image file.
      .end() // End the Nightmare session. Any queued operations are completed and the headless browser is terminated.
      .then(() => server.close()); // Stop the web server when we are done.
  })
  .then(() => {
    console.log("All done :)");
  })
  .catch(err => {
    console.error("Something went wrong :(");
    console.error(err);
  });

Note the use of the goto function; this is what actually directs the browser to load our visualization.

Web pages usually take some time to load. That’s probably not going to be very long, especially as we are running a local web server, but still we face the danger of taking a screenshot of the headless browser before or during its initial paint. That’s why we must call the wait function to wait until the chart’s <svg> element appears in the browser’s DOM before we call the screenshot function.

Eventually, the end function is called. Up until now we have effectively built a list of commands to send to the headless browser. The end function actually sends the commands to the browser, which takes the screenshot and outputs the file nyc-temperatures.png. After the image file has been captured we finish up by shutting down the web server.

You can find the completed code under the capture-visualization sub-directory in the repo. Go into the sub-directory and install dependencies:

cd nodejs-visualization-example/capture-visualization
cd public
bower install
cd ..
npm install

Now you can try the code for yourself:

node index.js

This has been an extract from chapter 11 of Data Wrangling with JavaScript now available on the Manning Early Access Program. Please use this discount code fccdavis3 for a 37% discount. Please check The Data Wrangler for new updates on the book.

The post Server-Side Visualization With Nightmare appeared first on CSS-Tricks.

Categories: Web Technologies

Percona Server for MySQL 5.7.21-21 Is Now Available with Increased Built-In Security Enhancements

Planet MySQL - Tue, 04/24/2018 - 04:59

Percona announces the GA release of Percona Server for MySQL 5.7.21-21 on April 24, 2018. Download the latest version from the Percona web site or the Percona Software Repositories. You can also run Docker containers from the images in the Docker Hub repository.

This version of Percona Server for MySQL 5.7.21 includes three new encryption features – Vault keyring plug-in, encryption for InnoDB general tablespaces, and encryption for binary log files.

These new capabilities, which allow companies to immediately increase security for their existing databases, are also part of a larger project to build complete, robust, enterprise-grade encryption capabilities into Percona Server for MySQL, allowing customers and the community to satisfy their most rigorous security compliance requirements. Percona also announced the release of a new version of Percona XtraBackup that supports backing up Percona Server for MySQL instances that have these encryption features enabled.

Based on MySQL 5.7.21, including all the bug fixes in it, Percona Server for MySQL 5.7.21-21 is the current GA release in the Percona Server for MySQL 5.7 series. Percona provides completely open-source and free software.

New Features:
  • A new variable innodb_temp_tablespace_encrypt is introduced to turn encryption of temporary tablespace and temporary InnoDB file-per-table tablespaces on/off. Bug fixed #3821.
  • A new variable innodb_encrypt_online_alter_logs simultaneously turns on encryption of files used by InnoDB for merge sort, online DDL logs, and temporary tables created by InnoDB for online DDL. Bug fixed #3819.
  • A new variable innodb_encrypt_tables can be set to ON, making InnoDB tables encrypted by default, to FORCE, disabling the creation of unencrypted tables, or to OFF, restoring the previous behavior. Bug fixed #1525. (An illustrative configuration sketch follows this list.)
  • The query response time plugin can now be disabled at the session level with the new variable query_response_time_session_stats.
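For orientation only: pulled together in my.cnf, the new encryption options above might look something like the sketch below. The values are illustrative assumptions rather than recommendations, a keyring plugin (such as the new Vault keyring plugin) must also be configured for table encryption, and the Percona documentation should be consulted before enabling any of this.

[mysqld]
# Illustrative values only -- consult the Percona Server documentation first.
innodb_encrypt_tables = ON              # or FORCE / OFF, as described above
innodb_temp_tablespace_encrypt = ON     # encrypt temporary tablespaces
innodb_encrypt_online_alter_logs = ON   # encrypt online DDL / merge-sort files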
Bugs Fixed:
  • Attempting to use a partially-installed query response time plugin could have caused a server crash. Bug fixed #3959.
  • There was a server crash caused by a materialized temporary table from semi-join optimization with a key length larger than 1000 bytes. Bug fixed #296.
  • A regression in the original 5.7 port caused an integer overflow with thread_pool_stall_limit variable values bigger than 2 seconds. Bug fixed #1095.
  • A memory leak took place in Percona Server when Performance Schema was used in conjunction with thread pooling. Bug fixed #1096.
  • A code clean-up was done to fix compilation with clang, both general warnings (bug fixed #3814, upstream #89646) and clang 6 specific warnings and errors (bug fixed #3893, upstream #90111).
  • A compilation warning was fixed for the -DWITH_QUERY_RESPONSE_TIME=ON CMake compilation option, which makes QRT linked statically. Bug fixed #3841.
  • Percona Server returned an empty result for a SELECT query if the number of connections exceeded 65535. Bug fixed #314 (upstream #89313).
  • A clean-up in Percona Server binlog-related code was made to avoid uninitialized memory comparison. Bug fixed #3925 (upstream #90238).
  • The mysqldump utility with the --innodb-optimize-keys option worked incorrectly with foreign keys on the same table, producing invalid SQL statements. Bugs fixed #1125 and #3863.
  • A fix for the mysqld startup script failing to detect the jemalloc library location for preloading (and therefore not starting on systemd-based machines), introduced in Percona Server 5.7.21-20, was improved to take into account a previously created configuration file. Bug fixed #3850.
  • The possibility of a truncated bitmap file name was fixed in the InnoDB logging subsystem. Bug fixed #3926.
  • Temporary file I/O was not instrumented for Performance Schema. Bug fixed #3937 (upstream #90264).
  • A crash in the unsafe query warning checks with views took place for UPDATE statements in the case of statement binlogging format. Bug fixed #290.
MyRocks Changes:
  • A re-implemented variable rpl_skip_tx_api allows turning on the simple RocksDB write-batches functionality, increasing replication performance by skipping the transaction API. Bug fixed MYR-47.
  • Decoding value-less padded varchar fields could under some circumstances cause an assertion and/or data corruption. Bug fixed MYR-232.
TokuDB Changes:
  • Two new variables introduced for the TokuDB fast updates feature, tokudb_enable_fast_update and tokudb_enable_fast_upsert, should now be used instead of the NOAR keyword, which is now optional at compile time and off by default. Bugs fixed #63 and #148.
  • A set of compilation fixes was introduced to make TokuDB successfully build in MySQL / Percona Server 8.0. Bugs fixed #84, #85, #114, #115, #118, #128, #139, #141, and #172.
  • Conditional compilation code dependent on the version ID in the TokuDB tree was separated and arranged into specific version branches. Bugs fixed #133, #134, #135, and #136.
  • An ALTER TABLE ... COMMENT = ... statement caused TokuDB to rebuild the whole table, which is not needed, as only the FRM metadata should be changed. Bugs fixed #130 and #137.
  • A data race on the cache table pair attributes was fixed.

Other bugs fixed: #3793, #3812, #3813, #3815, #3818, #3835, #3875 (upstream #89916), #3843 (upstream #89822), #3848, #3856, #3887, MYR-160, MYR-245, #109, #111, #180, #181, #182, and #188.

The release notes for Percona Server for MySQL 5.7.21-20 are available in the online documentation. Please report any bugs on the project bug tracking system.

The post Percona Server for MySQL 5.7.21-21 Is Now Available with Increased Built-In Security Enhancements appeared first on Percona Database Performance Blog.

Categories: Web Technologies
