Simple Mahjong Titans Entertainment

Mahjong is a skill game of East Asian origin, traditionally played by four players with a set of 144 tiles bearing different symbols and characters, with scoring and winning rules that varied from region to region. Today the game is enjoyed all over the world, and online Mahjong offers a convenient form of entertainment on desktop and mobile devices. The rules vary across the different Mahjong games available, which include Titans, Classic, Solitaire, Connect, Alchemy, Looney Tunes, Empire and Dream Pet Mahjong. Mahjong Titans is the simplest to understand and to play on mobile.

Unlike traditional Chinese Mahjong, which is played by up to four people, this version is played online by a single player and is available on Windows Vista. You choose from six figures (a cat, dragon, crab, turtle, fortress and spider), which represent the different difficulty levels. You then keep pairing up the open, exposed tiles until no matching pair is left.

Simple Rules for Mahjong Titans game

With this game, you simply follow the rules and think smart. Simple step-by-step rules are listed online and summarized below:
• Select the tile you want to move and match it with the corresponding tile of the same layout
• Keep matching the tile pairs that match
• As you keep matching the tiles, you keep earning points
• For more points, find matched pairs in a row

Since this is a mind puzzle like other card games, smart thinking will make you the winner. For hints on likely pairs to match, press H, and for more assistance click the Help button to get more hints.
Online Mahjong Titans is convenient because you can save your game and continue it later. This means you can accumulate points at your own pace for more bonuses. Modern technology also lets you customize the sound and animation settings to get more fun out of the game.

mahjong mobile free download

Enjoy Every Game-Play Games Online

Online games

Like all things, online games have their advantages as well as their disadvantages. Their consequences for a player's life can be more or less serious, depending on how much room the games take up in it. The benefits of online games include the following:

– Online games are cheaper than games for PlayStation or Xbox consoles.
– There is nothing to download or install for online games.
– The choice of online games is huge, and we are not restricted by a console model like PlayStation or Xbox.
– We can keep in touch with our friends during an online game.
– Online games are compatible with many systems (PC, Mac, mobile, tablets, etc.).
– By playing online games on our mobiles, we get used to the device and we learn to control it.
– Online games can be played at work or in other places.
– We can easily hide the fact that we play games online.

Just play free download game online and get the benefits!

https://www.playstation.com/en-us/explore/games/free-to-play/

https://www.myrealgames.com/genres/free-games-download/

 

Free Online Math Games | Math Playground

Education-themed games

Play is an extremely important part of any child's life and the fundamental tool children use to assimilate the world around them. Didactic or educational math games stimulate the development of their basic skills, so children not only acquire new abilities through these activities but also develop their cognitive, creative and social skills. As we mentioned before, educational games help children develop those skills while giving them tools to make sense, in the best possible way, of the world that surrounds them and everything there is to learn. Children love challenges, so it will not be difficult to encourage your child to take part in a didactic game. From video games to board games, there is a wide range of educational activities to put into practice. If you want to try, just play a free downloadable math game!

https://www.playstation.com/en-us/explore/games/free-to-play/

http://www.mathgametime.com/math-games

PS4 games | The best PS4 games out now and coming soon

PS4

The PlayStation 4 has several advantages over the PlayStation 3, most of them related to the ability to play within a social network. It also has a number of disadvantages, but they are minor compared to the virtues the console offers. Weighing these advantages and disadvantages gives us a better idea of whether or not buying a PlayStation 4 is a good idea. Advantages of the PlayStation 4:

– It has an optional 4K output resolution. This is the latest resolution technology in the display market.
– It is possible to access a social network within the PlayStation 4 that has several virtues.
– While you play on the PlayStation 4, you can share the game so that someone else on this social network can watch it live and comment on what they think.
– Also, while someone is watching your PS4 game, you can hand the game over, that is, pass control of the game to that person on the social network. This is a clear advantage offered by the PlayStation 4.
– Another advantage is that the DualShock 4 has an excellent design and features a touchpad in addition to buttons with different functions.

Get free download game PS4 right now!

https://www.playstation.com/en-us/explore/games/free-to-play/

free download game mahjong

Mahjong

Mahjong is a board game that requires skill, intelligence, calculation and luck, and is usually played by four people. Born in China, probably in the 19th century, it is now widespread in the rest of the world, especially in the United States and Japan. The name literally means “hemp sparrow”, and the game is played with a set of tiles that bear some similarities to some Western card games. Players earn points by creating appropriate combinations of tiles. The tiles physically resemble dominoes but are conceptually closer to playing cards. A full set contains 144 tiles, of which 108 are “numeric” suit tiles; these are divided into three suits (circles, bamboo and characters) and show values from one to nine. Interested? Get a free Mahjong game download here!

https://www.playstation.com/en-us/explore/games/free-to-play/ 

https://www.mahjong.com/Mahjong+Downloads

free download game Xbox 360

Xbox 360

The main advantage of the Xbox 360 is its affordable price compared to its competitors. An Xbox 360 Arcade is only slightly more expensive than the Nintendo Wii, so you don’t have to pay as much as you would for a PS3 to get a next-gen console experience. The main reason is that the Xbox 360 uses a DVD drive, which is cheaper than the PS3’s Blu-ray drive. This strategy is powerful enough to attract many gamers. By buying the Xbox Premium, you get the console, a 60GB hard drive, a chat headset, a 10-meter LAN cable, a component cable, one month of Xbox Live Gold, and a wireless controller. With the PS3, you do not get a headset, LAN cable, or component cable. With the Premium pack, you are immediately ready to enter the world of game consoles plus its online features, even without the built-in Wi-Fi found on the PS3. You can get free Xbox 360 game downloads here!

https://www.playstation.com/en-us/explore/games/free-to-play/

https://downloadgamexbox.com/category/iso/

What are Crawlability and Indexability of a Website?

Tell me, what’s the first thing that comes to your mind when you think about ranking a website?

Content? Or maybe backlinks?

I admit, both are crucial factors for positioning a website in search results. But they’re not the only ones.

In fact, two other factors play a significant role in SEO – crawlability and indexability. Yet, most website owners have never heard of them.

At the same time, even small problems with indexability or crawlability could result in your site losing its rankings. And that’s regardless of what great content or how many backlinks you have.

What are crawlability and indexability?

To understand these terms, let’s start by taking a look at how search engines discover and index pages. To learn about any new (or updated) page, they use what’s known as web crawlers, bots whose aim is to follow links on the web with a single goal in mind:

To find, and index new web content.

Matt Cutts, formerly of Google, posted an interesting video explaining the process in detail.

In short, both of these terms relate to a search engine’s ability to access pages on a website and add them to its index.

Crawlability describes the search engine’s ability to access and crawl content on a page.

If a site has no crawlability issues, then web crawlers can access all its content easily by following links between pages.

However, broken links or dead ends might result in crawlability issues – the search engine’s inability to access specific content on a site.

Indexability, on the other hand, refers to the search engine’s ability to analyze and add a page to its index.

Even though Google could crawl a site, it may not necessarily be able to index all its pages, typically due to indexability issues.

What affects crawlability and indexability?

1. Site Structure

The informational structure of the website plays a crucial role in its crawlability.

For example, if your site features pages that aren’t linked to from anywhere else, web crawlers may have difficulty accessing them.

Of course, they could still find those pages through external links, providing that someone references them in their content. But on the whole, a weak structure could cause crawlability issues.

2. Internal Link Structure

A web crawler travels through the web by following links, just as you would on any website. It can therefore only find pages that you link to from other content.

A good internal link structure, therefore, will allow it to quickly reach even those pages deep in your site’s structure. A poor structure, however, might send it to a dead end, resulting in a web crawler missing some of your content.

3. Looped Redirects

Looped or broken page redirects stop a web crawler in its tracks, resulting in crawlability issues.

4. Server Errors

Similarly, broken server redirects and many other server-related problems may prevent web crawlers from accessing all of your content.

5. Unsupported Scripts and Other Technology Factors

Crawlability issues may also arise as a result of the technology you use on the site. For example, since crawlers can’t follow forms, gating content behind a form will result in crawlability issues.

Content loaded through scripts such as JavaScript or Ajax may also be hidden from web crawlers.

6. Blocking Web Crawler Access

Finally, you can deliberately block web crawlers from indexing pages on your site.

And there are some good reasons for doing this.

For example, you may have created a page you want to restrict public access to. And as part of preventing that access, you should also block it from the search engines.

However, it’s easy to block other pages by mistake too. A simple error in the code, for example, could block the entire section of the site.
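To make this concrete, here is a minimal robots.txt sketch (the paths are hypothetical, purely for illustration). The first rule blocks a single private area on purpose, while the commented-out rule shows how one overly broad Disallow line could hide a whole section of the site by mistake:

User-agent: *
Disallow: /members-only/

# A rule this broad would accidentally block an entire section:
# Disallow: /blog/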

You can find the whole list of crawlability issues in this article – 18 Reasons Your Website is Crawler-Unfriendly: Guide to Crawlability Issues.

How to make a website easier to crawl and index?

I’ve already listed some of the factors that could result in your site experiencing crawlability or indexability issues. And so, as a first step, you should ensure they don’t happen.

But there are also other things you can do to make sure web crawlers can easily access and index your pages.

1. Submit Sitemap to Google

A sitemap is a small file that resides in the root folder of your domain and contains direct links to every page on your site; you submit it to the search engine using Google Search Console.

The sitemap will tell Google about your content and alert it to any updates you’ve made to it.
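For reference, a basic XML sitemap looks roughly like the following sketch (the URLs are placeholders). Once the file sits in your root folder, you submit its address in Google Search Console under Sitemaps:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2019-06-01</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/blog/</loc>
    <lastmod>2019-05-20</lastmod>
  </url>
</urlset>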

2. Strengthen Internal Links

We’ve already talked about how interlinking affects crawlability. And so, to increase the chances of Google’s crawler finding all the content on your site, improve links between pages to ensure that all content is connected.

3. Regularly update and add new content

Content is the most important part of your site. It helps you attract visitors, introduce your business to them, and convert them into clients.

But content also helps you improve your site’s crawlability. For one, web crawlers visit sites that constantly update their content more often. And this means that they’ll crawl and index your page much quicker.

4. Avoid duplicating any content

Duplicate content, that is, pages that feature the same or very similar content, can result in lost rankings.

But duplicate content can also decrease the frequency with which crawlers visit your site.

So, inspect and fix any duplicate content issues on the site.

5. Speed up your page load time

Web crawlers typically have only a limited time they can spend crawling and indexing your site. This is known as the crawl budget. And basically, they’ll leave your site once that time is up.

So, the quicker your pages load, the more of them a crawler will be able to visit before they run out of time.

Tools for managing crawlability and indexability

If all of the above sounds intimidating, don’t worry. There are tools that can help you to identify and fix your crawlability and indexability issues.

Log File Analyzer

Log File Analyzer will show you how desktop and mobile Google bots crawl your site, and if there are any errors to fix and crawl budget to save. All you have to do is upload the access.log file of your website, and let the tool do its job.

An access log is a list of all requests that people or bots have sent to your site; the analysis of a log file allows you to track and understand the behavior of crawl bots.

Read our manual on Where to Find the Access Log File.
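If you want to peek at the log yourself before uploading it, here is a minimal Python sketch, assuming a standard combined-format access.log; the file name and the simple user-agent check are illustrative assumptions, not part of the tool:

from collections import Counter

googlebot_hits = Counter()
with open('access.log') as log_file:  # path is an assumption
    for line in log_file:
        if 'Googlebot' in line:  # crude user-agent check; verify via reverse DNS for accuracy
            try:
                # combined log format: ... "GET /some/page HTTP/1.1" ...
                request = line.split('"')[1]
                url = request.split()[1]
                googlebot_hits[url] += 1
            except IndexError:
                continue  # skip malformed lines

for url, hits in googlebot_hits.most_common(10):
    print(hits, url)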

Site Audit

Site Audit is a part of the SEMrush suite that checks the health of your website. Scan your site for various errors and issues, including the ones that affect a website’s crawlability and indexability.

SEMrush Site Audit

Google tools

Google Search Console helps you monitor and maintain your site in Google. It’s a place for submitting your sitemap, and it shows the web crawlers’ coverage of your site.

Google PageSpeed Insights allows you to quickly check a website’s page loading speed.

Conclusion

Most webmasters know that to rank a website, they at least need strong and relevant content and backlinks that increase their websites’ authority.

What they don’t know is that their efforts are in vain if search engines’ crawlers can’t crawl and index their sites.

That’s why, apart from focusing on adding and optimizing pages for relevant keywords and building links, you should constantly monitor whether web crawlers can access your site and report its content back to the search engine.

https://www.semrush.com/blog/what-are-crawlability-and-indexability-of-a-website/

How to Protect a #1 Ranking When Your Pesky Competitors Are Trying to Take You Down

You finally did it.

You reached the pinnacle of SEO—the coveted #1 position. Time to rest on your laurels and watch the traffic roll in each and every month, right?

Yes, if you are okay with your competitors working feverishly to steal what you have worked so hard for. Because here is the truth: they are coming after you and will take your spot if you don’t do anything about it.

So, what can you do about it?

After running experiments over the past 6 months trying to protect high-traffic #1 positions for clients, I created a process that has kept me several steps ahead of my competitors.

The Top-Spot Security Plan: 3 Steps to Securing Your #1 Ranking

Note: Technical SEO also plays a huge role in protecting a #1 ranking (and your rankings in general). This guide goes over the content side of things, but you also need to make sure your site is a well-oiled machine.

1. Make your content even more link worthy.

Duh, you need to keep your content updated and look for ways to improve it.

But there are a few specific ways to do this in the context of keeping your top position — that also help you grab even more keywords with the same post.

Part 1: Analyze your competitors’ content and beat them at their own game.

Your content is already way better than everyone else’s…obviously. But that doesn’t mean the content you are competing with is complete garbage.

After all, content needs to be pretty good to hit the first page in most situations, right?

I recommend reading through every article on the first page for your keyword and asking questions like:

  • Do they give great examples?
  • Do they have awesome, well-explained tactics and strategies?
  • Is their post visually appealing?
  • Do they have really helpful graphs and diagrams?
  • Did they do their research? Do they make a valid argument?
  • What specific things are people talking about in the comments? What do they love? What are they arguing about?
  • How can I take this to the next level?

Your content is already ranking #1 because it is the best article on the internet for your target keyword (and your links obviously have an impact as well).

But that doesn’t mean it always will be the best if you never come back to it; you have to keep looking for ways to improve, refresh, and revamp your content to stay at the top.

Part 2: Add new sections to your content based on Google Search Console impressions.

As SEOs know, Google normally doesn’t like helping us out, but this little gift from Heaven is an exception.

Google Search Console essentially tells you which keywords you should optimize your content for.

Here is how you can utilize this:

1. Head to GSC and click on performance.

2. Add a page filter containing your top-ranking post’s URL.

3. Click the columns for Clicks and Impressions at the top, then click Queries.

4. Sort by impressions and look for keywords that get tons of impressions but few clicks (thus making CTR low).

5. Use these long-tail keywords for further optimization, as well as ideas for new sections in your post.

Sometimes you can rank for one of these bad boys just by putting it in a heading or using it a few times in the body.
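If you prefer working outside the GSC interface, the same filtering takes a few lines of pandas. This is a sketch, assuming you exported the Queries table for that page as queries.csv with Clicks, Impressions, CTR and Position columns; the thresholds are arbitrary:

import pandas as pd

queries = pd.read_csv('queries.csv')  # GSC Queries export (assumed filename and columns)
queries['CTR'] = queries['CTR'].str.rstrip('%').astype(float)  # "3.2%" -> 3.2

# High visibility, low click-through: candidates for new sections and headings
opportunities = queries[(queries['Impressions'] > 500) & (queries['CTR'] < 1.0)]
print(opportunities.sort_values('Impressions', ascending=False).head(20))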

2. Keep building links and promoting your content as usual (but be sure to include these 3 crucial tactics).

Nothing will guarantee the death of your #1 spot faster than stopping all link building and promotion once a post hits #1. Yes, you need to shift the majority of your focus to boosting other posts. But that doesn’t mean you should forget about the post that is getting you boatloads of leads and cash. Where is the sense in that?

Outside of your standard link building tactics, there are a few things you need to do specifically for top-ranking content:

1. Spy on your competitors’ every move with the SEMrush backlink explorer.

Here is a fantastic strategy.

Like we have talked about, your competitors are frantically building links to catch up to you. The funny thing is, you can watch pretty much every move they make by using the SEMrush backlink tool you already use on a daily basis, but with a significant difference.

You know the drill.

1. Enter a competing article’s URL into the search bar.

2. Click backlinks on the left-hand menu, then click the backlinks tab.

3. Here is the important part — check “New” and only look at recently gained backlinks (because you have likely already gone through all the links they previously had).

Do this for every competing article under you on the first page.

Every time they get a new high-quality link, add it to a spreadsheet and reach out to the same site. I do this every Friday so I can quickly act on link opportunities my competitors unknowingly show me. When they get really excited about a great link they just got, I go take it from them. 🙂

2. Set Up Brand Monitoring to find new link and promotion opportunities.

SEMrush’s Brand Monitoring tool is fantastic for many reasons, but I love it mainly because it is an invaluable tool for holding onto a top position.

It helps you:

  1. Find mentions of specific keywords on other sites that are directly related to your content so you can reach out to them for a link.
  2. Find people who shared content containing the specific keywords from above so you can reach out to them for shares.

These prospects are much more likely to link to and share your content because they have already shown their interest in your content’s topic.

I have found that my outreach emails convert 2x – 3x better when reaching out to people from this report. Here is how you can set this up to protect your #1 ranking:

1. Head to the Brand Monitoring tool and create a new project with your top ranking post’s URL and title.

2. Add your post’s main keywords and click “Start Tracking”.

3. Look through the Mentions tab to find web pages and Twitter accounts that are likely to help you promote your content.

4. Reach out to every new site/person you find who mentions your content’s topic.

I recommend doing this once each week (along with the new links strategy).

3. Keep an eye on the SERPs for new threats and take their links.

It is obvious that you want to monitor the ranking changes right under you, but it is also essential to look for new, potential threats on the whole first page. I am not talking about looking at your ranking reports every day.

I am talking about literally typing your keyword into Google and checking the whole first page for changes.

Is there a recently published article that just reached the bottom of the first page? Is it from an authoritative site? Uh oh. Better keep an eye out there.

It seems like common sense, but taking your eye off the SERPs for even a few days can come back to bite you. Real life example:

Earlier this year, one of my client’s #1 ranking posts got jumped. I saw it drop to #2 in my reports. Crap.

It turns out, a brand new post was gaining steam and went all the way from the bottom of page 1 to the #1 ranking within about 72 hours.

If I had been paying attention to the rankings on the entire first page, I could have prevented it.

(In case you are wondering: I believe it was mainly a CTR issue. I made a way more compelling SEO Title and Meta Description through some CTR experiments, and it went back to #1…and is still there.)

Moral of the story: watch the first page like a hawk.

Whenever you see a new post hit the first page:

  1. Look through their backlink profile and take their links.
  2. Add them to your weekly list of posts to monitor for new links.

3. Run experiments to optimize your organic Click-Through-Rate.

As you already know, CTR is an important ranking factor when you reach the top half of the first page. If an article under yours has a higher CTR, you are at risk of getting jumped. A “safe” CTR for a #1 ranking is between 20% – 30%, so I recommend running experiments with your SEO Title and Meta Description combo until you reach 30%.

Here is my process for running CTR experiments:

Step 1: Take a look at your post’s CTR over the last 3 months.

Here is a quick refresher on how to find CTR in Google Search Console.

Just click on performance, then average CTR.

Filter the report to show CTR for the keywords your top-ranking post shows up for.

To do this, add a page filter with your post’s URL.

Then, click back to queries, and you’ll see all the keywords your post shows up for (with the CTR for each).

Step 2: Write a new SEO Title OR Meta Description.

When doing a CTR experiment, it is super important to only change one thing at a time. Why?

If you change both your SEO Title and Meta Description, you won’t be entirely sure how either specifically affected CTR. So, I recommend starting with SEO Title experiments, then moving to the Meta Description once you’ve found the most clickable title.

(As far as best practices for crafting CTR-optimized titles and descriptions, I recommend Brian Dean’s CTR Magnet Method.)

Step 3: Set a time frame for your experiment and leave it be.

The right time frame for your experiments will vary depending on how much traffic your #1 ranking post drives.

If its main keywords drive thousands of visitors per day, 2-3 days will give you enough data.

If the traffic is more like 50 – 100 per day, you might want to wait 7 – 14 days to analyze your results.

Whatever time frame you decide on, be sure to do CTR experiments regularly until you have reached that 30% range.

Step 4: Analyze the results in Google Search Console.

To compare the CTR between two date ranges in Google Search Console, just click the date filter, then “Compare”, then enter the previous date range that corresponds with the time frame of your experiment.

For example:

If I did a 28-day test, I would want to compare CTR vs. the 28 days before I implemented the changes.
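If you keep exports of the performance report, you can run the same comparison outside of GSC. Here is a minimal pandas sketch; the file names and column names are assumptions based on a standard Queries export:

import pandas as pd

def overall_ctr(csv_path):
    # Overall CTR = total clicks / total impressions across all queries in the export
    df = pd.read_csv(csv_path)
    return df['Clicks'].sum() / df['Impressions'].sum()

before = overall_ctr('queries_before_change.csv')  # the 28 days before the new title went live
after = overall_ctr('queries_after_change.csv')    # the 28 days after

print(f'CTR before: {before:.1%}, after: {after:.1%}, change: {after - before:+.1%}')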

Your Work Isn’t Done Once You Hit #1

Like that rhyme? 🙂

Don’t make the mistake most people make: ignoring their posts once they hit #1. You can definitely shift your focus to bringing up other rankings, but if you don’t take time to protect your top spot, someone will eventually take it.

And that really sucks. These 3 strategies are really effective, don’t take much time to implement, and can keep you from being usurped.

Do you use other strategies to keep your #1 ranking? I would love to hear them in the comments.

https://www.semrush.com/blog/how-to-protect-your-number-one-ranking/

Free Download Games PlayStation 3

PlayStation 3 – The Free Things You Get When You Purchase One

When the name Sony is mentioned, people ordinarily think about electronic appliances and gaming consoles. This is because Sony is one of the leading makers of gaming consoles in the industry, recognized for producing one of the greatest consoles on the market: the PlayStation.

As many people know, the original PlayStation was a worldwide success that changed the way people look at gaming consoles. It paved the way for another gaming console from Sony, the PlayStation 2. Like its predecessor, the PlayStation 2 was widely accepted throughout the world, and upon its release, retail establishments continually ran out of PlayStation 2 stock.

Currently, after six long years of waiting for Sony to develop another gaming console that offers great-quality entertainment, Sony is releasing the latest addition to its gaming console line, the PlayStation 3. The PlayStation 3 is considered one of the most anticipated gaming consoles today. With the different features you can benefit from and the large number of freebies you can get when purchasing a PlayStation 3, you will surely want one of your own.

When you obtain a PlayStation 3, it will include a wireless Bluetooth controller and a free game. The basic configuration now incorporates HDMI output so you can get the most out of your PlayStation 3 when you play it on a High Definition TV. It will also include the Blu-ray drive, now standard for both the premium and basic configurations.

Because the PlayStation 3 is so hot, many websites these days are offering preorders for it. And, because they need customers, many websites are also offering freebies to their buyers. They give away free PlayStation 3 accessories or games to attract more people to buy the PlayStation 3 from them.

Because there are likewise people who use the internet to run scams, it is important to examine the website you intend to order your PlayStation 3 from. Determine whether the site is a legitimate PlayStation 3 retailer that will ship you a genuine PlayStation 3 along with genuine accessories and games.

The PlayStation 3 can definitely give you a top-quality gaming experience. It is equipped with the most recent technology in game consoles, such as its graphics chip, processors, and other features. As a result, you can expect the PlayStation 3 gaming console to be rather high priced. If you would like a PlayStation 3 that provides the best quality for your money, look for official release promotions to get the most out of the console. You can also obtain discounts from various PlayStation 3 retail outlets where they offer them.

The PlayStation 3 is definitely a worthy gaming console choice. Obviously, with all the features and the freebies that some Sony PlayStation retail outlets offer, you should shop around and obtain a PlayStation 3 from a retail outlet that gives you full value for your money.

Also, before you obtain your own PlayStation 3, you should be aware that there are two basic configurations you can purchase: the basic configuration and the premium configuration. The basic configuration costs US$499 in the United States and the premium configuration US$599. The difference is that the premium has more features than the standard configuration, such as the 60GB upgradeable hard drive, built-in Wi-Fi, and flash card readers.

With the basic configuration, you can upgrade at your own pace if you can’t afford to add US$100 for the premium configuration just yet. These are the points you should remember when you are going to obtain the PlayStation 3.

https://www.playstation.com/en-us/explore/games/free-to-play/

Analyzing Search Engine Results Pages on a Large Scale

As an SEO professional, you know that a big part of your job is tracking rankings in the SERPs, both yours and your competitors’. I am going to share a way to obtain SERP data and import it into a DataFrame (table / CSV / Excel sheet) for analysis, on a large scale, and in an automated way.

I will be using the programming language Python, so there will be some coding involved. If you don’t know any programming, you can ignore the code snippets below, as you don’t need to understand them to follow along.

So how exactly are we going to procure the data, and what are we going to do with it? Let’s find out.

Importing the Data

Google’s Custom Search Engine is a service that allows you to create your own customized search engine, where you can specify the sites you want crawled and set your own relevancy rules (if you don’t set any specific rules, then your custom search engine will essentially search the whole web by default).

You can further streamline your efforts by specifying parameters for your search queries, including the location of the user, the language of the site, image search, and much more.

You can also programmatically pull the data through its API, which is what we are going to do here.

Here are the steps to set up an account to import data (skip if you don’t want to run the code yourself):

  1. Create a custom search engine. At first, you might be asked to enter a site to search. Enter any domain, then go to the control panel and remove it. Make sure you enable “Search the entire web” and image search. You will also need to get your search engine ID, which you can find on the control panel page.
  2. Enable the custom search API. The service will allow you to retrieve and display search results from your custom search engine programmatically. You will need to create a project for this first.
  3. Create credentials for this project so you can get your key.
  4. Enable billing for your project if you want to run more than 100 queries per day. The first 100 queries are free; then for each additional 1,000 queries, you pay USD $5.
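Before moving on to the advertools wrapper used below, it may help to see what the underlying request looks like. Here is a minimal sketch that calls the Custom Search JSON API directly with the requests library; your own cx and key go into the parameters:

import requests

endpoint = 'https://www.googleapis.com/customsearch/v1'
params = {
    'key': 'YOUR_GOOGLE_DEVELOPER_KEY',
    'cx': 'YOUR_GOOGLE_CUSTOM_SEARCH_ENGINE_ID',
    'q': 'flights to hong kong',
    'gl': 'us',  # country of the search
}
response = requests.get(endpoint, params=params)
results = response.json()

for item in results.get('items', []):
    print(item['link'], '-', item['title'])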

Programming Environment

For this analysis, I will be using the Jupyter Notebook as my editor. If you are not familiar with it, it is basically a browser-based tool that combines regular text, programming code, as well as the output of the code that is run. The output could be text, tables, as well as images (which could be data visualizations as you will see below).
You can try it out here to see how it works if you are interested.

It looks a lot like a word processor, and it is great for running analyses, creating campaigns, and general programming work. I use it for almost all my work.

By allowing you to store the steps that you made by keeping a copy of the code, the notebook enables you to go back and see how you came to your conclusions, and whether or not there are errors or areas that need improvement. It further allows others to work on the analysis from the point where you left off.

The notebook contains text boxes that are referred to as “cells”, and this is where you can enter regular text or code. Here is how it renders, including a brief description of the cells.

Jupyter Notebook code and markdown cells

Here is a simple data visualization, and an explanation of what the code does below it:
Jupyter Notebook data visualization

Line 1: Importing a package simply makes it available, like starting an application on your computer. Here we are importing a package called “matplotlib” with the sub module “pyplot” which is used for data visualization. As its name is quite long and tedious to type every time we want to run a command, we import it as “plt” as a shortcut, which you can see in lines 6 – 10.

Any time we want to run a command from matplotlib, we simply say “plt.<command_name>” to do that, for example, “plt.plot”.

Lines 3 and 4: Here we define two variables, “x” and “y”, where each is a list of numbers that we want to plot. The “=” sign here is an ‘assignment’ operator, and not used as in the mathematical sense. You are defining a list of numbers [1, 2, 3, 4, 5] in this case, and calling it “x”. Think of it as a shortcut, so you know what this list is so you don’t have to type all the numbers every time.

Line 6: We simply plot “x” and “y”.

Lines 7, 8, and 9: Here we add a few options, and there are a number of available options as you will see later in the tutorial. The results of those settings can be seen in the chart as the title, “xlabel”, and “ylabel”.

Line 10: shows the plot.

In “[5]”: This identifies the cell as an input cell. This was the fifth instruction that I ran, and that’s why it shows the number 5.

Running those lines of code produces the chart above.
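For reference, here is a reconstruction of the ten lines described above; the exact numbers and labels are illustrative:

import matplotlib.pyplot as plt

x = [1, 2, 3, 4, 5]
y = [1, 4, 9, 16, 25]

plt.plot(x, y)
plt.title('A simple plot')
plt.xlabel('x values')
plt.ylabel('y values')
plt.show()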

Below I have added links to all data sets used, as well as the notebook for this tutorial if you are interested, in one repository.  All the steps that I took to import the data, manipulate it, and visualize it are included here, so you will see the entirety of the code that produced these results. As mentioned, if you are not familiar with Python, you can ignore the code and follow along.

Handling the Data

We will be using three Python packages for our work:

  • advertools: To connect to the Google CSE API and receive SERPs in a table format. (This is a package that I wrote and maintain. It has a bunch of productivity and analysis tools for online marketing.)
  • pandas: For data manipulation, reshaping, merging, sorting, etc.
  • matplotlib: For data visualization.

To give you a quick idea, here is a sample from the customized SERP results that we will be working with:

import pandas as pd
serp_flights = pd.read_csv('serp_flights.csv')
serp_flights.head()

Sample of customized SERP results in a spreadsheet view

Not all columns are visible, but I would like to share a few notes on the different columns available:

“queryTime” is the time that the query was run (when I made the request); this is different from “searchTime”, which is the amount of time it took Google to run the query (usually less than one second). Most of the main columns will always be there, but if you pass different parameters, you will have more or fewer columns. For example, you would have columns describing the images, in case you specify the type of search to be “image”.

The Dataset

We are going to take a look at the airline’s tickets industry, and here are the details:

  • Destinations: I obtained the top 100 flight destinations from Wikipedia and used them as the basis for the queries.
  • Keywords: Each destination was prepended with two variations, so we will be looking at “flights to destination” and “tickets to destination”.
  • Countries: Each variation of those was requested for one of two English-speaking countries, The United States and The United Kingdom.
  • SERPs: Naturally, each result contains ten links, together with their metadata.

 As a result we have 100 destinations x 2 variations x 2 countries x 10 results = 4,000 rows of data.

We begin by importing the packages that we will use, and defining our Google CSE ID and key:

%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import advertools as adv
import pandas as pd
pd.set_option('display.max_columns', None)
cx = 'YOUR_GOOGLE_CUSTOM_SEARCH_ENGINE_ID'
key = 'YOUR_GOOGLE_DEVELOPER_KEY'

Now we can import the Wikipedia table that shows the top destinations, along with some additional data:

top_dest = pd.read_html('https://en.wikipedia.org/wiki/List_of_cities_by_international_visitors',
header=0)[0]
top_dest.head().style.format({'Arrivals 2016Euromonitor': '{:,}'})

Wikipedia's top travel destinations

Next, we can create the keywords by concatenating the two variations mentioned above:

cities = top_dest['City'].tolist()
queries = ['flights to ' + c.lower() for c in cities] + ['tickets to ' + c.lower() for c in cities]
queries[:3] + queries[-3:] + ['etc...']
['flights to hong kong',
'flights to bangkok',
'flights to london',
'tickets to washington d.c.',
'tickets to chiba',
'tickets to nice',
'etc...']

With the main parameters defined, we can now send the requests to Google as follows:

serp_flights = adv.serp_goog(cx=cx, key=key, q=queries, gl=['us', 'uk'])  # request the SERP data from Google
serp_flights = pd.read_csv('serp_flights.csv', parse_dates=['queryTime'])  # load the previously saved CSV file
serp_us = serp_flights[serp_flights['gl'] == 'us'].copy()  # create a subset for US
serp_uk = serp_flights[serp_flights['gl'] == 'uk'].copy()  # create a subset for UK

Let’s now take a quick look at the top domains:

print('Domain Summary - Overall')
(serp_flights
.pivot_table('rank', 'displayLink',
aggfunc=['count', 'mean'])
.sort_values([('count', 'rank'), ('mean', 'rank')],
ascending=[False, True])
.assign(coverage=lambda df: df[('count', 'rank')] / len(serp_flights)*10)
.head(10).style.format({("coverage", ''): "{:.1%}",
('mean', 'rank'): '{:.2f}'}))

Top 10 domains – flights and tickets

As you can see, since we are mainly interested in the ranking of domains, we have summarized it by three main metrics:

1. Count: the number of times that the domain appeared in the searches that we made.

2. Mean: the mean (average) rank of each of the domains.

3. Coverage: the count divided by the number of queries.

The above pivot table is for all the results and gives us a quick overview. However, I think it is more meaningful to split the data into two different pivot tables, one for each of the countries:

print('Domain Summary - US')
(serp_flights[serp_flights['gl']=='us']
.pivot_table('rank', 'displayLink',
aggfunc=['count', 'mean'])
.sort_values([('count', 'rank'), ('mean', 'rank')],
ascending=[False, True])
.assign(coverage=lambda df: df[('count', 'rank')] / len(serp_flights)*10 * 2)
.head(10).style.format({("coverage", ''): "{:.1%}",
('mean', 'rank'): '{:.2f}'}))

Top 10 domains – flights and tickets, United States

For coverage, I divided by 400 in the first table, but for the countries, I am dividing by 200, because we are interested in queries for that country. An interesting point here is that kayak.com has lower coverage than tripadvisor.com, but it has a higher mean rank. In top positions, the difference between position two and three is quite high in terms of value. Depending on your case, you might value one metric or the other.

print('Domain Summary - UK')
(serp_flights[serp_flights['gl']=='uk']
.pivot_table('rank', 'displayLink',
aggfunc=['count', 'mean'])
.sort_values([('count', 'rank'), ('mean', 'rank')],
ascending=[False, True])
.assign(coverage=lambda df: df[('count', 'rank')] / len(serp_flights)*10*2)
.head(10).style.format({("coverage", ''): "{:.1%}",
('mean', 'rank'): '{:.2f}'}))

Top 10 domains – flights and tickets, United Kingdom

Having a coverage of 108% means that skyscanner.net appeared in all searches, and in some cases it appeared more than once in the same SERP. Note that its mean rank is 1.45, far better than the second domain. No joking with Skyscanner!

Now that we have an idea about the number of times the websites appeared in search and the average ranks they have, it might also be good to visualize the data, so we can see how it is distributed.

To determine this, we first get the top 10 domains for each country, and define two new DataFrames (tables) containing only the filtered data, and then visualize:

top10_domains = serp_flights.displayLink.value_counts()[:10].index
top10_df = serp_flights[serp_flights['displayLink'].isin(top10_domains)]
top10_domains_us = serp_us.displayLink.value_counts()[:10].index
top10_df_us = serp_flights[serp_flights['displayLink'].isin(top10_domains_us)]
top10_domains_uk = serp_uk.displayLink.value_counts()[:10].index
top10_df_uk = serp_flights[serp_flights['displayLink'].isin(top10_domains_uk)]
fig, ax = plt.subplots(facecolor='#ebebeb')
fig.set_size_inches(15, 9)
ax.set_frame_on(False)
ax.scatter(top10_df['displayLink'].str.replace('www.', ''),
top10_df['rank'], s=850, alpha=0.02, edgecolor='k', lw=2)
ax.grid(alpha=0.25)
ax.invert_yaxis()
ax.yaxis.set_ticks(range(1, 11))
ax.tick_params(labelsize=15, rotation=9, labeltop=True,
labelbottom=False)
ax.set_ylabel('Search engine results page rank', fontsize=16)
ax.set_title('Top 10 Tickets and Flights Domains', pad=75, fontsize=24)
ax.text(4.5, -0.5, 'Organic Search Rankings for 200 Keywords in US & UK',
ha='center', fontsize=15)
fig.savefig(ax.get_title() + '.png',
facecolor='#eeeeee', dpi=150, bbox_inches='tight')
plt.show()

Top 10 tickets and flights SERP visualization

For each appearance on a SERP, we plot a very light circle in the position where that domain appeared (from one to ten). The more frequently a domain appears, the darker the circle. For example, kayak.com, expedia.com, and skyscanner.net have solid blue circles in position one, as well as lighter ones in different positions.

A minor issue in this analysis so far is that it treats all keywords equally. The number of tourists in the top one hundred list varies between 2 and 26 million, so they are clearly not equal. Also, for your specific case, you might have your own set of the “top 100” based on the website you are working on. But since we are exploring the industry and trying to understand the positions of the different players, I don’t think it is a bad assumption. Just keep this in mind when doing a similar analysis for a specific case.

As above, this was for the overall data, and below is the same visualization split by country:

top10_dfs = [top10_df_us, top10_df_uk]
colors = ['darkred', 'olive']
suffixes = [' - US', ' - UK']
fig, ax = plt.subplots(2, 1, facecolor='#ebebeb')
fig.set_size_inches(15, 18)
for i in range(2):
    ax[i].set_frame_on(False)
    ax[i].scatter(top10_dfs[i]['displayLink'].str.replace('www.', ''),
                  top10_dfs[i]['rank'], s=850, alpha=0.02,
                  edgecolor='k', lw=2, color='darkred')
    ax[i].grid(alpha=0.25)
    ax[i].invert_yaxis()
    ax[i].yaxis.set_ticks(range(1, 11))
    ax[i].tick_params(labelsize=15, rotation=12, labeltop=True,
                      labelbottom=False)
    ax[i].set_ylabel('Search engine results page rank', fontsize=16)
    ax[i].set_title('Top 10 Tickets and Flights Domains' + suffixes[i],
                    pad=75, fontsize=24)
    ax[i].text(4.5, -0.5, 'Organic Search Rankings for 200 Keywords',
               ha='center', fontsize=15)
plt.tight_layout()
fig.savefig(ax[i].get_title() + '.png',
            facecolor='#eeeeee', dpi=150, bbox_inches='tight')
plt.show()

Top 10 tickets and flights SERP visualization: US vs UK

Content Quantity

Another important metric you might be interested in is how many pages each domain has for the different cities. Assuming the content is real, and with a minimum level of quality, it follows that the more content you have, the more likely you are to appear on SERPs — especially for keyword variations and the different combinations users can think of.

One of the parameters of the request allowed by Google is specifying the site you want to search in, and you have the option to include or exclude that site. So if we search for “tickets to hong kong” and specify “siteSearch=www.tripadvisor.com” with “siteSearchFilter=i” (for “include”) we will get the search results restricted to that site only. An important column that comes together with every response is “totalResults”, which shows how many pages Google has for that query. Since that query is restricted to a certain domain and is for a specific keyword, we can figure out how many pages that domain has that are eligible to appear for that keyword.

I ran the queries for the top five destinations, and for the two countries:

pagesperdomain_us = adv.serp_goog(cx=cx, key=key, q=queries[:5],
siteSearch=top10_domains_us.tolist(),
siteSearchFilter='i', num=1)
pagesperdomain_uk = adv.serp_goog(cx=cx, key=key, q=queries[:5],
siteSearch=top10_domains_uk.tolist() ,
siteSearchFilter='i', num=1)

Here are the first ten results from the US for “flights to hong kong”, and below that is a visualization for each of the keywords and the destination countries:

(pagesperdomain_us
[['searchTerms', 'displayLink', 'totalResults']]
.head(10)
.style.format({'totalResults': '{:,}'}))

SERP number of results for “flights to hong kong”

from matplotlib.cm import tab10
from matplotlib.ticker import EngFormatter
fig, ax = plt.subplots(5, 2, facecolor='#eeeeee')
fig.set_size_inches(17, 20)
countries = [' - US', ' - UK']
pages_df = [pagesperdomain_us, pagesperdomain_uk]
for i in range(5):
    for j in range(2):
        ax[i, j].set_frame_on(False)
        ax[i, j].barh((pages_df[j][pages_df[j]['searchTerms'] == queries[i]]
                       .sort_values('totalResults')['displayLink']
                       .str.replace('www.', '')),
                      (pages_df[j][pages_df[j]['searchTerms'] == queries[i]]
                       .sort_values('totalResults')['totalResults']),
                      color=tab10.colors[i+5*j])
        ax[i, j].grid(axis='x')
        ax[i, j].set_title('Pages per domain. Keyword: "' + queries[i] + '"' + countries[j],
                           fontsize=15)
        ax[i, j].tick_params(labelsize=12)
        ax[i, j].xaxis.set_major_formatter(EngFormatter())
plt.tight_layout()
fig.savefig('Pages per domain' + '.png',
            facecolor='#eeeeee', dpi=150, bbox_inches='tight')
plt.show()

Number of pages per domain

As you can see, the difference can be dramatic in some cases, and it does not always correlate with top positions. Feel free to analyze further, or try other keywords if you are interested.

Analyzing Titles

There are many ways to analyze titles (and snippets), but in this case, one particular thing caught my attention, and I think it is very important in this industry. Many sites have the price of the tickets in the title of the page, which is not only visible in SERPs but is one of the most important factors that either encourage or discourage people to click.

For example:

serp_flights[serp_flights['searchTerms'] == 'flights to paris'][['searchTerms', 'title']].head(10)

Flights to Paris SERP titles

Let’s now extract the prices and currencies, so we can do further analysis.

serp_flights['price'] = (serp_flights['title']
.str.extract('[$£](\d+,?\d+\.?\d+)')[0]
.str.replace(',', '').astype(float))
serp_flights['currency'] = serp_flights['title'].str.extract('([$£])')
serp_flights[['searchTerms', 'title', 'price', 'currency', 'displayLink']].head(15)

SERP titles with prices

Now we have two new columns, “price” and “currency”. In some cases, there is no price in the title (“NaN” for not a number), but for others, there are dollar and pound signs. Some sites also display the prices in other currencies, but because they are very small in number, it doesn’t make sense to compare those, especially when there are big differences in their values. So, we will only be dealing with dollars and pounds.

For the top five queries, we can plot the different prices (where available), and get a quick overview of how the prices compare.

Here is a quick price comparison engine for you:

fig, ax = plt.subplots(5, 2, facecolor='#eeeeee')
fig.set_size_inches(17, 20)
countries = [' - US ($)', ' - UK (£)']
country_codes = ['us', 'uk']
currency = ['$', '£']
top10dfs = [top10_domains_us, top10_domains_uk]
for i in range(5):
    for j in range(2):
        ax[i, j].grid()
        ax[i, j].set_frame_on(False)
        df = serp_flights[(serp_flights['gl'] == country_codes[j]) &
                          (serp_flights['searchTerms'] == queries[i]) &
                          (serp_flights['currency'] == currency[j])]
        for country in top10dfs[j]:
            ax[i, j].scatter(df.sort_values('totalResults')['displayLink'].str.replace('www.', ''),
                             df.sort_values('totalResults')['price'],
                             color=tab10.colors[i+5*j], s=300)
        ax[i, j].set_title('Price per domain. Keyword: "' + queries[i] + '"' + countries[j],
                           fontsize=15)
        ax[i, j].tick_params(labelsize=12, rotation=9, axis='x')
plt.tight_layout()
fig.savefig('Prices per domain' + '.png',
            facecolor='#eeeeee', dpi=150, bbox_inches='tight')
plt.show()

Ticket prices per domain

To get a general overview of pricing for the top domains, we can also plot all instances where a price appears in the title of a SERP, so we can see how prices compare overall by domain:

fig, ax = plt.subplots(1, 2, facecolor='#eeeeee')
fig.set_size_inches(17, 8)
countries = [' - US ($)', ' - UK (£)']
country_codes = ['us', 'uk']
currency = ['$', '£']
top10dfs = [top10_domains_us, top10_domains_uk]
for j in range(2):
    ax[j].grid()
    ax[j].set_frame_on(False)
    df = serp_flights[(serp_flights['gl'] == country_codes[j]) &
                      (serp_flights['currency'] == currency[j]) &
                      (serp_flights['displayLink'].isin(top10dfs[j]))]
    ax[j].scatter(df.sort_values('totalResults')['displayLink'].str.replace('www.', ''),
                  df.sort_values('totalResults')['price'],
                  color=tab10.colors[j],
                  s=300, alpha=0.1)
    ax[j].set_title('Prices per domain' + countries[j],
                    fontsize=21)
    ax[j].tick_params(labelsize=18, rotation=18, axis='x')
    ax[j].tick_params(labelsize=18, axis='y')
plt.tight_layout()
fig.savefig('Prices per country' + '.png',
            facecolor='#eeeeee', dpi=150, bbox_inches='tight')
plt.show()

Ticket prices per domain per country

In the US, expedia.com clearly has lower prices on average, and a good portion of them are below $200. Tripadvisor.com seems to be the highest on average, but they also have a higher range of fluctuation compared to others. Opodo.co.uk is clearly the cheapest for the UK, with almost all its prices below £200.

Keep in mind that the two charts have different Y axes and show prices with different currencies. At the time of writing the GBP is around $1.30; this does not necessarily mean that expedia.com actually has lower prices, as it could be based on “starting from” or premised on certain conditions, etc. But these are their advertised prices on SERPs.

Peeking at Snippets

As with titles, we can do a similar analysis of snippets. One site caught my attention with the text of their snippets, and that is kayak.com.

Below is a sample of their snippets. Note that they mention airlines’ names, prices, and destination cities, even though the queries do not indicate where the user is flying from. Note also that they are different for each query. For the destination of Hong Kong, they specify flights from San Francisco and New York, while for the destination of Dubai they specify New York, Chicago, and Orlando.

It seems that they have the text of the snippets dynamically generated based on the most frequent places people buy tickets from, and the airlines they use for those destinations; this could be an interesting insight into the market, or at least on Kayak’s view of the market and how they position themselves. You might want to export the Kayak snippets and generate a mapping between source and destination cities, as well as the airlines that they are most frequently associated with.

serp_flights[serp_flights['displayLink'] == 'www.kayak.com'][['snippet']][:10]

Snippet text in SERPs
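As a rough starting point for that mapping, you could extract the origin cities from Kayak's snippets with something like the sketch below. The regular expression is a guess at the snippet wording and would need tuning once you inspect the real text:

kayak = serp_flights[serp_flights['displayLink'] == 'www.kayak.com'].copy()

# Assumes wording such as "... from San Francisco from $383 ..."; adjust the pattern to the actual snippets.
kayak['origin'] = kayak['snippet'].str.extract(r'from ([A-Z][\w\s]+?) from \$')[0]
kayak[['searchTerms', 'origin', 'snippet']].head(10)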

Final Thoughts

This article was a quick overview of how Google’s Custom Search Engine can be used to automate a large number of reports and a few ideas on what can be analyzed.

There are other things you might consider as well:

  • Run the same report periodically: Usually, we are not interested solely in a snapshot of where we stand. We are interested in knowing how our pages perform across time. So you might run the same report once a month, for example, and produce charts showing how positions are changing in time.
  • Assign weights to different destinations: As mentioned above, we are assuming that all destinations are equal in value, but that is usually not the case. Try adding your own weights to each destination, maybe by taking into consideration the number of annual visitors mentioned in the table, or by utilizing your own conversion / sales / profitability data.
  • Try other keywords and combinations: Travel is one of the most complicated industries when it comes to generating and researching keywords. There are so many ways to express desire in traveling to a place (for instance, New York, New York City, NY, NYC, JFK, all mean the same thing when it comes to travel). Note that we did not specify a “from” city, which makes a huge difference. Try “travel”, “holidays” and/or pricing-related keywords.
  • Try doing the same for YouTube SERPs: advertools has a similar function for extracting similar data for videos. YouTube data is much richer because it includes data about video views, ratings, number of comments, metadata about the channel, and much more.
  • Build on this notebook: Instead of re-inventing the wheel, you can get a copy of the code and data, or you can explore the interactive version online (you will need to have your own Google CSE keys) and run a different analysis, or produce different visualizations. I’d love to see other ideas or approaches.

Good luck!

https://www.semrush.com/blog/analyzing-search-engine-results-pages/